In the Getting started with git post we learned about local and remote branches (master and origin/master respectively), and in Git: Keeping in sync we learned how to keep the two in sync. This is all great stuff, but if you’re working in a team and/or on a serious project, using only the master branch is not a good idea. The reason being that if development code is continuously pushed to master, it will never be stable.
What you should do instead, at a minimum, is have two branches. For example, dev is used for features which are being developed, while master is used for production code. Once the in-development features have been tested and are ready for production, they can be merged into master. By employing this method, master will always be in a stable state.
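The workflow above can be sketched with a throwaway repo (file names and commit messages below are hypothetical, purely for illustration):

```shell
set -e
repo=$(mktemp -d)                  # throwaway repo just for this demo
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "You"
git checkout -qb master            # ensure the branch is named master
echo "v1" > app.txt
git add app.txt
git commit -qm "initial commit on master"

git checkout -qb dev               # create dev for in-progress work
echo "v2" > app.txt
git commit -qam "feature work on dev"

git checkout -q master             # once tested, merge the feature in
git merge -q dev
cat app.txt                        # master now contains the feature
```

Because dev was built directly on top of master, the final merge is a simple fast-forward; master ends up with the tested feature while remaining stable throughout.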
In the Getting started with git post we covered a number of things, one of which was using push to send our commits to a remote git repo. This works fine when both of these conditions are met:
- You’re the only person working on the project
- You’re doing all of your development from the same machine
If either of these points is not true, you’ll soon find that the push command fails. This is because you must first retrieve all of the commits from the remote branch before you can merge in your own. In other words, you must be in sync before you can make modifications.
Therefore, if someone does a push before you, and/or you do a push from a different machine, the machine you’re currently using will be out of sync with the remote branch. As a result, you will be unable to push until you first re-sync with the remote branch.
Note: This issue is not encountered when you meet the two criteria listed above, because your machine will always be in sync with the remote branch given that it’s the only one making commits.
Let’s now run through an example to see this in action.
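The sketch below simulates the whole scenario end to end: a local bare repo stands in for the remote, and two clones stand in for two machines (all names are hypothetical). Machine A pushes first, so machine B’s push is rejected until it pulls:

```shell
set -e
work=$(mktemp -d)
cd "$work"
git init -q --bare remote.git               # stand-in for the remote repo

git clone -q remote.git machine-a           # first machine
cd machine-a
git config user.email "a@example.com" && git config user.name "A"
git checkout -qb master
echo "base" > file.txt
git add file.txt && git commit -qm "initial commit"
git push -q origin master
cd ..

git clone -q remote.git machine-b           # second machine
cd machine-b
git config user.email "b@example.com" && git config user.name "B"
git checkout -q master
echo "from B" > b.txt
git add b.txt && git commit -qm "B's commit"
cd ..

cd machine-a                                # machine A pushes first...
echo "from A" > a.txt
git add a.txt && git commit -qm "A's commit"
git push -q origin master
cd ../machine-b

# ...so machine B is now out of sync and its push is rejected
git push -q origin master 2>/dev/null || echo "push rejected"

git pull -q --no-rebase origin master       # re-sync with the remote first
git push -q origin master && echo "push succeeded"
```

Machine B’s first push fails because it is a non-fast-forward update; after pulling A’s commit, B’s push goes through.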
What is git?
Wikipedia has a great answer for this question:
Git is a version control system for tracking changes in computer files and coordinating work on those files among multiple people. It is primarily used for source code management in software development, but it can be used to keep track of changes in any set of files. As a distributed revision control system it is aimed at speed, data integrity, and support for distributed, non-linear workflows.
OK cool, now that we know what git is, let’s take a look at git repositories.
What is a Repository (repo)?
A repository is a receptacle for the files which are part of a project. Each project should be stored in a separate repository so that its files are kept separate, access to them can be administered separately, and so on.
Creating a new repo
When you create a new repo using a git server provided by the likes of GitHub and GitLab, you will be given a few options to help you get started. One of these options is as follows:
git clone email@example.com:OzNetNerd/git-tutorial.git
git add README.md
git commit -m "add README"
git push -u origin master
In my previous post I touched on the basics of how you can use pytest to test your code. In this post I’ll be covering how you can use Allure2 to prettify your pytest results.
Allure2 Adapter for pytest
The first thing we need to do is install the Allure adapter for pytest. As the documentation states, this repository contains a plugin for py.test which automatically prepares input data used to generate an Allure Report.
Issue the following command to install the adapter:
sudo pip install allure-pytest
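Before generating a report we need some results to report on. Here’s a minimal, hypothetical pytest file (any existing suite will do just as well):

```python
# test_addition.py - a minimal, hypothetical pytest suite to report on
def addition(a, b):
    """Return the sum of two numbers."""
    return a + b

def test_addition():
    assert addition(2, 3) == 5

def test_addition_negative():
    assert addition(-1, 1) == 0
```

With the adapter installed, pointing pytest at a results directory (e.g. pytest --alluredir=./allure-results) writes the data Allure2 then uses to build its report.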
If you’re fairly new to coding, chances are you’ve run into an issue where you make a minor change in one place and end up breaking your script in another place. In order to find out what went wrong, you start adding print statements all over the place to debug your code.
While it sounds like a good idea, what you’re actually doing is relying on Python to tell you when you’ve made a syntactical error. However, what if your syntax is fine, but your code is incorrect?
For example, say you accidentally changed your addition function into a multiplication function by replacing the + with a *:
def addition(a, b):
return a * b
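Python raises no error here because the syntax is perfectly valid; only checking the function’s behaviour reveals the bug. The sketch below (hypothetical values) also shows why a single print check can even hide it:

```python
def addition(a, b):
    """The intended behaviour."""
    return a + b

def broken_addition(a, b):
    """The accidental bug: * instead of +."""
    return a * b

# With inputs 2 and 2 the bug hides, because 2 + 2 == 2 * 2 == 4...
print(addition(2, 2), broken_addition(2, 2))   # 4 4

# ...but inputs 2 and 3 expose it:
print(addition(2, 3), broken_addition(2, 3))   # 5 6
```

This is exactly the kind of logic error that scattered print statements are poor at catching, since everything "runs fine".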
In my previous post, Python: Scope, I touched on the topic of Shadowing. In this post I’ll be delving deeper into it.
As Wikipedia says, variable shadowing occurs when a variable declared within a certain scope (decision block, method, or inner class) has the same name as a variable declared in an outer scope.
There are some interesting debates on whether shadowing is a bad thing or not in this StackOverflow Q&A as well as this one. In a nutshell, there are three trains of thought:
- It’s fine to use shadowing.
- You should avoid shadowing by ensuring all names are unique.
- You should avoid shadowing by using functions.
Let’s now run through each of these options to see how they work.
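First, here’s a minimal sketch of what shadowing looks like in practice (names are hypothetical):

```python
count = 10          # outer (module-level) name

def show_shadowing():
    count = 5       # shadows the outer count inside this function
    return count

print(show_shadowing())  # 5 - the inner name wins inside the function
print(count)             # 10 - the outer name is untouched
```

Inside the function, the local count hides the module-level one; the outer binding is never modified.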
Scope is the term used to define the location(s) in which Python searches for a name-to-object mapping (e.g. a variable).
As described in this StackOverflow post, Python uses the LEGB Rule to locate a definition. LEGB stands for:
- L, Local — Names assigned in any way within a function (def or lambda), and not declared global in that function.
- E, Enclosing-function locals — Names in the local scope of any and all statically enclosing functions (def or lambda), from inner to outer.
- G, Global (module) — Names assigned at the top level of a module file, or by executing a global statement in a def within the file.
- B, Built-in (Python) — Names preassigned in the built-in names module: open, range, SyntaxError, etc.
In a nutshell, Python will first look at the local scope for a name-to-object mapping (e.g. people = 5). If it cannot find one, it will continue up the hierarchy until it does. If no mapping is found anywhere, it raises a NameError exception.
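A minimal sketch of that lookup order, with one name defined at each level (names are hypothetical):

```python
x = "global"                # G: module-level name

def outer():
    x = "enclosing"         # E: enclosing-function local
    def inner():
        x = "local"         # L: found first, shadowing the others
        return x
    return inner()

print(outer())      # local  - inner() resolves x in its own local scope
print(x)            # global - the module-level x was never touched
print(len("abc"))   # 3      - len is resolved from the built-in scope (B)
```

Deleting the innermost assignment would make inner() fall back to the enclosing x, then the global one, and finally the built-ins.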
To shed some more light on this let’s take a step back and analyse each of the points listed above. I’ll do so in reverse order because that is the way we write Python code, as you’ll see in a moment.
A lot of Python books mention that “everything in Python is an object” and that “objects are first class citizens”, but they don’t always explain what these things actually mean. Let’s try to fix that up now.
Everything in Python is an Object
Dive Into Python gives a great explanation:
Different programming languages define “object” in different ways. In some, it means that all objects must have attributes and methods; in others, it means that all objects are subclassable. In Python, the definition is looser; some objects have neither attributes nor methods (more on this in Chapter 3), and not all objects are subclassable (more on this in Chapter 5). But everything is an object in the sense that it can be assigned to a variable or passed as an argument to a function (more on this in Chapter 4).
This is so important that I’m going to repeat it in case you missed it the first few times: everything in Python is an object. Strings are objects. Lists are objects. Functions are objects. Even modules are objects.
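A short sketch of what being a first-class object buys you (names are hypothetical): functions can be assigned to variables and passed around exactly like any other value.

```python
def greet(name):
    return f"Hello, {name}"

# Functions are objects: assign one to a variable...
say_hello = greet
print(say_hello("world"))        # Hello, world

# ...and pass one as an argument to another function
def call_twice(func, value):
    return [func(value), func(value)]

print(call_twice(greet, "git"))

# Even modules are objects, with attributes of their own
import math
print(type(math), math.pi)
```

The same is true of strings, lists, and every other value in the language: anything you can name, you can store, pass, and return.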
There are plenty of articles on the internet that attempt to explain what if __name__ == "__main__" is and what it does, but (in my humble opinion) the examples are more often than not too complex. With that in mind, this post aims to be the simplest explanation on the planet! :)
What does it do?
This statement is used when you want your code to be used as both a standalone script, as well as a module that can be imported and used by other scripts.
For example, if it is run as a standalone script, you may want to provide a menu to ensure users input all of the necessary information. On the other hand, if it is being imported as a module, perhaps you’d like to avoid the menu altogether and instead only use the functions contained in the script (e.g. func1 in the example below).
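A minimal sketch of such a script (the file name, func1, and the menu body are all hypothetical):

```python
# my_module.py - a hypothetical, minimal example
def func1():
    """Usable both by importers and by the standalone script."""
    return "func1 was called"

def menu():
    """A stand-in for the interactive menu a standalone run might show."""
    print("Welcome! Running func1 for you...")
    print(func1())

if __name__ == "__main__":
    # True only when the file is executed directly (python my_module.py),
    # so importers never see the menu
    menu()
```

Running python my_module.py executes menu(), while import my_module in another script skips it; the importer can simply call my_module.func1() directly.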
I first mentioned Telegraf in the My Monitoring Journey: Cacti, Graphite, Grafana & Chronograf post and then covered its installation and setup in the Installing & Setting up InfluxDB, Telegraf & Grafana post. Let’s now delve a little deeper, shall we?
The good news is that there’s a lot less to Telegraf’s configuration than there is to InfluxDB’s, so you’ll likely find this post easier to follow than the Getting to know InfluxDB article.
What is it?
Before diving into configurations, it would be best to first cover off what Telegraf actually is. To quote the Telegraf GitHub page:
Telegraf is an agent written in Go for collecting, processing, aggregating, and writing metrics.
Design goals are to have a minimal memory footprint with a plugin system so that developers in the community can easily add support for collecting metrics from well known services (like Hadoop, Postgres, or Redis) and third party APIs (like Mailchimp, AWS CloudWatch, or Google Analytics).
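As a taste of what that plugin system looks like in practice, here’s a minimal sketch of a telegraf.conf: one input plugin collecting CPU metrics and one output plugin writing them to InfluxDB. The values are illustrative, not a recommended production config.

```toml
# Minimal, hypothetical telegraf.conf sketch
[agent]
  interval = "10s"            # how often to collect metrics

# Input plugin: collect CPU usage metrics
[[inputs.cpu]]
  percpu = true               # per-core metrics
  totalcpu = true             # plus an aggregate total

# Output plugin: write the collected metrics to InfluxDB
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "telegraf"
```

Inputs and outputs are both just TOML tables; adding support for another service is typically a matter of dropping in another [[inputs.*]] or [[outputs.*]] block.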