A successful Git branching model

By Vincent Driessen
on Tuesday, January 05, 2010


In this post I present the development model that I introduced for some of my projects (both at work and privately) about a year ago, and which has turned out to be very successful. I’ve been meaning to write about it for a while now, but I’ve never really found the time to do so thoroughly, until now. I won’t talk about any of the projects’ details, merely about the branching strategy and release management.

Why git?

For a thorough discussion of the pros and cons of Git compared to centralized source code control systems, see the web. There are plenty of flame wars going on there. As a developer, I prefer Git over all other tools around today. Git really changed the way developers think about merging and branching. In the classic CVS/Subversion world I came from, merging/branching had always been considered a bit scary (“beware of merge conflicts, they bite you!”) and something you only did every once in a while.

But with Git, these actions are extremely cheap and simple, and they are considered one of the core parts of your daily workflow, really. For example, in CVS/Subversion books, branching and merging is first discussed in the later chapters (for advanced users), while in every Git book, it’s already covered in chapter 3 (basics).

As a consequence of their simplicity and repetitive nature, branching and merging are no longer something to be afraid of. Version control tools are supposed to assist in branching/merging more than anything else.

Enough about the tools, let’s move on to the development model. The model that I’m going to present here is essentially no more than a set of procedures that every team member has to follow in order to arrive at a managed software development process.

Decentralized but centralized

The repository setup that we use, and that works well with this branching model, is one with a central “truth” repo. Note that this repo is only considered to be the central one (since Git is a DVCS, there is no such thing as a central repo at a technical level). We will refer to this repo as origin, since this name is familiar to all Git users.

Each developer pulls and pushes to origin. But besides the centralized push-pull relationships, each developer may also pull changes from other peers to form subteams. For example, this might be useful for working together with two or more developers on a big new feature, without pushing the work in progress to origin prematurely. In the figure above, there are subteams of Alice and Bob, Alice and David, and Clair and David.

Technically, this means nothing more than that Alice has defined a Git remote, named bob, pointing to Bob’s repository, and vice versa.
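
For example, Alice might set up such a remote with something like the following (the URL is just a made-up placeholder):

$ git remote add bob bob@bobs-machine:project.git
$ git fetch bob

After the fetch, Bob’s branches are available locally as bob/branchname and can be inspected or merged without going through origin.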

The main branches

At the core, the development model is greatly inspired by existing models out there. The central repo holds two main branches with an infinite lifetime:

  • master
  • develop

The master branch at origin should be familiar to every Git user. Parallel to the master branch, another branch exists called develop.

We consider origin/master to be the main branch where the source code of HEAD always reflects a production-ready state.

We consider origin/develop to be the main branch where the source code of HEAD always reflects a state with the latest delivered development changes for the next release. Some would call this the “integration branch”. This is where any automatic nightly builds are built from.

When the source code in the develop branch reaches a stable point and is ready to be released, all of the changes should be merged back into master somehow and then tagged with a release number. How this is done in detail will be discussed further on.

Therefore, each time changes are merged back into master, this is a new production release by definition. We tend to be very strict about this, so that theoretically, we could use a Git hook script to automatically build and roll out our software to our production servers every time there is a commit on master.
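
As a rough sketch of that idea (not something the model prescribes), a post-receive hook on the origin server could watch for pushes to master and call a deploy script; deploy.sh is a hypothetical placeholder here:

#!/bin/sh
# post-receive hook: Git feeds "<old-rev> <new-rev> <ref-name>" lines on stdin
while read oldrev newrev refname; do
    if [ "$refname" = "refs/heads/master" ]; then
        ./deploy.sh "$newrev"   # hypothetical build-and-roll-out script
    fi
done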

Supporting branches

Next to the main branches master and develop, our development model uses a variety of supporting branches to aid parallel development between team members, ease tracking of features, prepare for production releases and assist in quickly fixing live production problems. Unlike the main branches, these branches always have a limited lifetime, since they will be removed eventually.

The different types of branches we may use are:

  • Feature branches
  • Release branches
  • Hotfix branches

Each of these branches has a specific purpose and is bound to strict rules as to which branches may be its originating branch and which branches must be its merge targets. We will walk through them in a minute.

By no means are these branches “special” from a technical perspective. The branch types are categorized by how we use them. They are of course plain old Git branches.

Feature branches

May branch off from:
develop
Must merge back into:
develop
Branch naming convention:
anything except master, develop, release-*, or hotfix-*

Feature branches (sometimes called topic branches) are used to develop new features for the upcoming or a distant future release. When starting development of a feature, the target release in which this feature will be incorporated may well be unknown at that point. The essence of a feature branch is that it exists as long as the feature is in development, but it will eventually be merged back into develop (to definitely add the new feature to the upcoming release) or discarded (in case of a disappointing experiment).

Feature branches typically exist in developer repos only, not in origin.

Creating a feature branch

When starting work on a new feature, branch off from the develop branch.

$ git checkout -b myfeature develop
Switched to a new branch "myfeature"

Incorporating a finished feature on develop

Finished features may be merged into the develop branch to definitely add them to the upcoming release:

$ git checkout develop
Switched to branch 'develop'
$ git merge --no-ff myfeature
Updating ea1b82a..05e9557
(Summary of changes)
$ git branch -d myfeature
Deleted branch myfeature (was 05e9557).
$ git push origin develop

The --no-ff flag causes the merge to always create a new commit object, even if the merge could be performed with a fast-forward. This avoids losing information about the historical existence of a feature branch and groups together all commits that together added the feature. Compare:

In the latter case, it is impossible to see from the Git history which of the commit objects together implemented a feature; you would have to manually read all the log messages. Reverting a whole feature (i.e. a group of commits) is a true headache in the latter situation, whereas it is easily done if the --no-ff flag was used.
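
For example, with a --no-ff merge in place, backing out the whole feature later is a single revert of the merge commit (the placeholder below stands for the hash of the merge commit created above):

$ git revert -m 1 <merge-commit>

The -m 1 option tells Git which parent is the mainline (here develop), so the revert undoes exactly the changes that the feature branch brought in.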

Yes, it will create a few more (empty) commit objects, but the gain is much bigger than the cost.

Release branches

May branch off from:
develop
Must merge back into:
develop and master
Branch naming convention:
release-*

Release branches support preparation of a new production release. They allow for last-minute dotting of i’s and crossing t’s. Furthermore, they allow for minor bug fixes and preparing meta-data for a release (version number, build dates, etc.). By doing all of this work on a release branch, the develop branch is cleared to receive features for the next big release.

The key moment to branch off a new release branch from develop is when develop (almost) reflects the desired state of the new release. At least all features that are targeted for the release-to-be-built must be merged into develop at this point in time. All features targeted at future releases may not be; they must wait until after the release branch is branched off.

It is exactly at the start of a release branch that the upcoming release gets assigned a version number, not any earlier. Up until that moment, the develop branch reflected changes for the “next release”, but it is unclear whether that “next release” will eventually become 0.3 or 1.0, until the release branch is started. That decision is made at the start of the release branch and is carried out by the project’s rules on version number bumping.

Creating a release branch

Release branches are created from the develop branch. For example, say version 1.1.5 is the current production release and we have a big release coming up. The state of develop is ready for the “next release” and we have decided that this will become version 1.2 (rather than 1.1.6 or 2.0). So we branch off and give the release branch a name reflecting the new version number:

$ git checkout -b release-1.2 develop
Switched to a new branch "release-1.2"
$ ./bump-version.sh 1.2
Files modified successfully, version bumped to 1.2.
$ git commit -a -m "Bumped version number to 1.2"
[release-1.2 74d9424] Bumped version number to 1.2
1 files changed, 1 insertions(+), 1 deletions(-)

After creating a new branch and switching to it, we bump the version number. Here, bump-version.sh is a fictional shell script that changes some files in the working copy to reflect the new version. (This can of course be a manual change—the point being that some files change.) Then, the bumped version number is committed.

This new branch may exist there for a while, until the release is definitely rolled out. During that time, bug fixes may be applied in this branch (rather than on the develop branch). Adding large new features here is strictly prohibited. They must be merged into develop, and therefore wait for the next big release.

Finishing a release branch

When the state of the release branch is ready to become a real release, some actions need to be carried out. First, the release branch is merged into master (since every commit on master is a new release by definition, remember). Next, that commit on master must be tagged for easy future reference to this historical version. Finally, the changes made on the release branch need to be merged back into develop, so that future releases also contain these bug fixes.

The first two steps in Git:

$ git checkout master
Switched to branch 'master'
$ git merge --no-ff release-1.2
Merge made by recursive.
(Summary of changes)
$ git tag -a 1.2

The release is now done, and tagged for future reference.

Edit: You might also want to use the -s or -u <key> flags to sign your tag cryptographically.

To keep the changes made in the release branch, we need to merge those back into develop, though. In Git:

$ git checkout develop
Switched to branch 'develop'
$ git merge --no-ff release-1.2
Merge made by recursive.
(Summary of changes)

This step may well lead to a merge conflict (quite probably, in fact, since we have changed the version number). If so, fix it and commit.

Now we are really done and the release branch may be removed, since we don’t need it anymore:

$ git branch -d release-1.2
Deleted branch release-1.2 (was ff452fe).

Hotfix branches

May branch off from:
master
Must merge back into:
develop and master
Branch naming convention:
hotfix-*

Hotfix branches are very much like release branches in that they are also meant to prepare for a new production release, albeit unplanned. They arise from the necessity to act immediately upon an undesired state of a live production version. When a critical bug in a production version must be resolved immediately, a hotfix branch may be branched off from the corresponding tag on the master branch that marks the production version.

The essence is that work of team members (on the develop branch) can continue, while another person is preparing a quick production fix.

Creating the hotfix branch

Hotfix branches are created from the master branch. For example, say version 1.2 is the current production release running live and causing trouble due to a severe bug, but changes on develop are still unstable. We may then branch off a hotfix branch and start fixing the problem:

$ git checkout -b hotfix-1.2.1 master
Switched to a new branch "hotfix-1.2.1"
$ ./bump-version.sh 1.2.1
Files modified successfully, version bumped to 1.2.1.
$ git commit -a -m "Bumped version number to 1.2.1"
[hotfix-1.2.1 41e61bb] Bumped version number to 1.2.1
1 files changed, 1 insertions(+), 1 deletions(-)

Don’t forget to bump the version number after branching off!

Then, fix the bug and commit the fix in one or more separate commits.

$ git commit -m "Fixed severe production problem"
[hotfix-1.2.1 abbe5d6] Fixed severe production problem
5 files changed, 32 insertions(+), 17 deletions(-)

Finishing a hotfix branch

When finished, the bugfix needs to be merged back into master, but it also needs to be merged back into develop, in order to ensure that the bugfix is included in the next release as well. This is completely similar to how release branches are finished.

First, update master and tag the release.

$ git checkout master
Switched to branch 'master'
$ git merge --no-ff hotfix-1.2.1
Merge made by recursive.
(Summary of changes)
$ git tag -a 1.2.1

Edit: You might also want to use the -s or -u <key> flags to sign your tag cryptographically.

Next, include the bugfix in develop, too:

$ git checkout develop
Switched to branch 'develop'
$ git merge --no-ff hotfix-1.2.1
Merge made by recursive.
(Summary of changes)

The one exception to the rule here is that, when a release branch currently exists, the hotfix changes need to be merged into that release branch instead of develop. Back-merging the bugfix into the release branch will eventually result in the bugfix being merged into develop too, when the release branch is finished. (If work in develop immediately requires this bugfix and cannot wait for the release branch to be finished, you may safely merge the bugfix into develop now as well.)
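
For example, if a branch release-1.3 happened to be in progress (a made-up name for the sake of illustration), the hotfix would be merged there instead:

$ git checkout release-1.3
$ git merge --no-ff hotfix-1.2.1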

Finally, remove the temporary branch:

$ git branch -d hotfix-1.2.1
Deleted branch hotfix-1.2.1 (was abbe5d6).

Summary

While there is nothing really shockingly new about this branching model, the “big picture” figure that this post began with has turned out to be tremendously useful in our projects. It forms an elegant mental model that is easy to comprehend and allows team members to develop a shared understanding of the branching and releasing processes.

A high-quality PDF version of the figure is provided here. Go ahead and hang it on the wall for quick reference at any time.

Update: And for anyone who requested it: here’s the gitflow-model.src.key of the main diagram image (Apple Keynote).



If you want to get in touch, I’m @nvie on Twitter.

How to calculate Pi

How to Calculate Pi, π, Simpson’s Rule
Nearly everyone knows that Pi or π is a peculiar number that is a little more than three. Most of those people know that the number has importance because it is the ratio of the distance around the perimeter (circumference) of a circle when compared to that circle’s diameter. It is what is called an irrational number, where no fraction can be written that equals it, and in fact, the decimal numbers needed to precisely define Pi would go on forever. So how is that number determined?
We will present two different methods of calculating it here. The first could be called the hard way, and it is the approach that ancient Greeks used 2300 years ago to first determine a value for Pi. The second is a much more sophisticated approach which relies both on Calculus and on something called Simpson’s Rule, to arrive at a far more precise value rather simply and easily.

Continued:

The Original Way of Determining Pi
First, let us define a circle that has what is called Unity diameter. That is, it has a Radius = 0.5 and a Diameter = 1.00.
We could INSCRIBE an equilateral triangle inside the circle, where the points/corners just touch the circle. We could carefully measure the length of the sides of that triangle, and would find that they are each slightly over 0.866 units long. With the triangle having three sides, the total perimeter of the triangle is therefore about 2.6 units. We can see that the distance around the circle is greater than this, in other words, Pi must be greater than 2.6. In the same way, we could draw an equilateral triangle which is larger, where the midpoints of the sides each exactly touch the circle, and we can measure the length of those sides to be around 1.732 units. Again, with three sides, we have a total perimeter of this triangle to be around 5.2 units, so we know the distance around the circle must be less than 5.2.
Now, if we do the same thing using squares instead, the larger number of sides more closely follows the shape of the circle and we get better results, indicating that Pi must be between 2.83 and 4.00. If we use five-sided pentagons instead, the result is better yet, Pi being between 2.94 and 3.63. By using six-sided hexagons, Pi is shown to be between 3.00 and 3.46.
For the ancient Greeks, this proceeded fairly well, but it took a lot of time and effort and it required really accurate measurements of the lengths of the sides of the regular polygons, and also really accurate drawings of those polygons so that they truly were Regular (all equal sided). However, the process was continued (skipping many numbers of sides) up to 120 sides. If you think about it, a 120-sided inscribed polygon would clearly very closely resemble the shape of the circle, and would therefore closely indicate the value of Pi. In fact, by using 120-sided polygons, we can determine that Pi must be between 3.1412 and 3.1423, decently close to the 3.1416 that we all know. In fact, if you average the two values (based on lower limit and upper limit) you get 3.1418, a value that is quite close!
However, that value is not close enough for modern Engineering requirements, which is why the advanced approach presented below is now considered far better. Still, here is a chart of the (measured and calculated) values for various numbers of sides for the polygons; a short Python check of these numbers follows the chart. Note that if we do this for 2000-sided polygons, the value becomes quite close. However, the process of doing that measurement is extremely difficult for such short polygon sides, when the measured dimension must be to an accuracy of better than one part in a million!
Number of sides | one side, inside | total, inside | one side, outside | total, outside | average in/out
3 0.866025 2.598076 1.732051 5.196152 3.897114
4 0.707107 2.828427 1.000000 4.000000 3.414214
5 0.587785 2.938926 0.726543 3.632713 3.285820
6 0.500000 3.000000 0.577350 3.464102 3.232051
7 0.433884 3.037186 0.481575 3.371022 3.204104
8 0.382683 3.061467 0.414214 3.313709 3.187588
9 0.342020 3.078181 0.363970 3.275732 3.176957
10 0.309017 3.090170 0.324920 3.249197 3.169683
11 0.281733 3.099058 0.293626 3.229891 3.164475
12 0.258819 3.105829 0.267949 3.215390 3.160609
13 0.239316 3.111104 0.246478 3.204212 3.157658
14 0.222521 3.115293 0.228243 3.195409 3.155351
15 0.207912 3.118675 0.212557 3.188348 3.153512
16 0.195090 3.121445 0.198912 3.182598 3.152021
17 0.183750 3.123742 0.186932 3.177851 3.150796
18 0.173648 3.125667 0.176327 3.173886 3.149776
19 0.164595 3.127297 0.166870 3.170539 3.148918
20 0.156434 3.128689 0.158384 3.167689 3.148189
120 0.026177 3.141234 0.026186 3.142311 3.141772
480 0.006545 3.141570 0.006545 3.141637 3.141604
2000 0.001571 3.141591 0.001571 3.141595 3.141593
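
If you would rather not draw and measure, the same bounds can be computed with a few lines of Python (my own sketch, not part of the original text). For a unity-diameter circle the inscribed side is sin(180°/n) and the circumscribed side is tan(180°/n), so this only verifies the chart using trigonometry that already "knows" Pi; it is not an independent derivation:

import math

def polygon_bounds(n):
    # Perimeters of the inscribed and circumscribed regular n-gons
    # of a circle with diameter 1 (radius 0.5).
    inside = n * math.sin(math.pi / n)    # n * 2 * 0.5 * sin(pi/n)
    outside = n * math.tan(math.pi / n)   # n * 2 * 0.5 * tan(pi/n)
    return inside, outside

for n in (3, 4, 5, 6, 120, 2000):
    lo, hi = polygon_bounds(n)
    print(f"{n:5d} sides: {lo:.6f} < Pi < {hi:.6f} (average {(lo + hi) / 2:.6f})")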

The Improved Way of Determining Pi
This will be kept fairly painless! Actually, you do not need to know any Calculus, or even anything beyond simple adding, multiplying and dividing to do this! These next few paragraphs just explain the basis of why this works, which is because of some results in Calculus.
We first need to note that, by definition, Pi times the diameter of any circle gives its circumference. For a circle of radius 1 the circumference is therefore 2 * Pi, so half a circle, or 180 degrees, corresponds to Pi (usually said to be Pi radians).
It turns out to be fairly easily provable in Calculus that the Derivative of the Inverse Tangent (a trigonometry term) is equal to 1/(1 + X²). Since both the Inverse Tangent and its Derivative are continuous functions (except at specific points, which we will avoid), that means that the ANTI-Derivative of 1/(1 + X²) is the Inverse Tangent. For a continuous function, the Anti-Derivative is the same as the Integral, so this means that the Integral of 1/(1 + X²) is equal to the Inverse Tangent (over a given interval of angles).
We can select a specific range of angles, and for simplicity we select from zero to the angle which has a tangent of exactly 1, which is the angle that we often call 45 degrees. So if we can just evaluate that quantity 1/(1 + X²) over the range of X equals 0 to 1, and add it all up (as a Calculus Integral does), we would then have a result equal to the difference between the two angles, expressed in radians. In Trigonometry, the circumference of a circle is given as 2 * Pi * R, so Pi radians represents 180 degrees. Therefore our 45 degree range is just Pi/4, exactly.
Therefore, by evaluating the Integral of our 1/(1 + X²) over the range of 0 to 1, we would get a result that was exactly equal to Pi/4. We’re getting somewhere!
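In symbols, the reasoning of the last few paragraphs amounts to a single line:

\[ \frac{d}{dx}\arctan x = \frac{1}{1+x^2} \quad\Longrightarrow\quad \int_0^1 \frac{dx}{1+x^2} = \arctan(1) - \arctan(0) = \frac{\pi}{4} - 0 = \frac{\pi}{4}. \]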
There is a well proven rule in mathematics called Simpson’s Rule. It is actually an approximation, but a really good one, which is essentially based on the fact that if any three points of a curve are known, a unique parabolic curve can be drawn which passes through them, and so the simple quadratic formula for a parabola then gives a very good estimate for the curve in that section. In any case, Simpson’s Rule is fairly simple when the points on the curve are equally spaced along a coordinate and there are an even number of intervals between those points. We will use the simple example of four intervals, or 5 data points, here.
Our whole range is from 0 to 1, so our interval must be exactly 1/4, so we have values for X of 0, 1/4, 1/2, 3/4, and 1. We can easily calculate our 1/(1 + X²) for each of these values, to get 1, 16/17, 4/5, 16/25, and 1/2. Simpson’s Rule is actually very simple, where these various terms get multiplied by either 1, 2, or 4, and added together. I am going to make you find some textbook for the exact pattern, but it is really extremely simple. If presented here, it might distract from the central point! (The following tables do indicate those multipliers.)
We can present our data here in a small table:
Number of divisions = 4
value of X | calculated 1/(1 + X²) | multiplier | running total of multiplied quantities
0.00 1.0000000 1 1.0000000
0.25 0.9411765 4 4.7647059
0.50 0.8000000 2 6.3647059
0.75 0.6400000 4 8.9247059
1.00 0.5000000 1 9.4247059
According to Simpson’s Rule we now need to divide this by 3 and multiply by the size of our intervals (1/4), in other words, in this case, dividing by 12. We then get a result of 0.7853921569.
This value is then equal to the number of radians in 45 degrees. To get the number of radians in 180 degrees, in other words, Pi, we just multiply by four. We then get 3.1415686275
Given how simple this was to do, easily done with pencil and paper, it is pretty impressive that we get a result that is surprisingly precise!
So now we decide to use six intervals instead of four!
Number of divisions = 6
value of X | calculated 1/(1 + X²) | multiplier | running total of multiplied quantities
0 1.0000000 1 1.0000000
1/6 0.9729730 4 4.8918919
1/3 0.9000000 2 6.6918919
1/2 0.8000000 4 9.8918919
2/3 0.6923077 2 11.2765073
5/6 0.5901639 4 13.6371630
1 0.5000000 1 14.1371630
We must now divide this by 3 and multiply by 1/6, or actually, divide by 18, to get: 0.7853979452
Multiplying this by four gives 3.1415917809, an even better value for Pi.
It seems important to again note that this is a simple pencil and paper calculation that only involves simple addition, multiplication and division, and no exotic math stuff! Impressive, huh?
Well, you are free to invest maybe an hour in doing this calculation for 26 intervals. You will get a result of 0.7853981634 and then 3.1415926535 for the value of Pi, which is accurate to all ten of these decimal places!
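
For readers who would rather let a machine do the bookkeeping, here is a short Python sketch of the same composite Simpson's Rule computation (my own code, not from the original article); it reproduces the 4- and 6-interval figures above and can be run with 26 or more intervals just as easily:

from fractions import Fraction

def simpson_pi(intervals):
    # Approximate Pi as 4 * integral of 1/(1 + x^2) from 0 to 1,
    # using composite Simpson's Rule with the 1,4,2,4,...,2,4,1 weights.
    # 'intervals' must be even.
    h = Fraction(1, intervals)
    total = Fraction(0)
    for i in range(intervals + 1):
        x = i * h
        weight = 1 if i in (0, intervals) else (4 if i % 2 else 2)
        total += weight / (1 + x * x)
    return float(4 * total * h / 3)

for n in (4, 6, 26):
    print(n, simpson_pi(n))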

So, just in case you had thought that the original ancient Greek approach was still used, with polygons having billions of sides, in determining the extremely accurate values for Pi (sometimes given to 100 or one million decimals), now you know how it is actually done! Setting up a computer to do these simple additions, multiplications and divisions is pretty easy, and the only limitation then is the accuracy of the values of the numbers used in the computer. If you use a computer system that has 40 significant digits, even YOU could now quickly calculate Pi to 40 decimals!

Yet another modern way!
It turns out that there is a way to do this without having to rely on Simpson’s Rule. There is another Calculus proof that the Inverse Tangent of X (the Integral of 1/(1 + X²) that we used above) can be expressed as an alternating infinite series:
X - X³/3 + X⁵/5 - X⁷/7 + X⁹/9 - …
Since our values of X are zero and one, this simplifies. When X = 0 it is obvious that this is a sum of a lot of zeroes, or zero. For X = 1, this is just:
1 – 1/3 + 1/5 – 1/7 + 1/9 …
So the difference, the number we want, which is Pi/4, is obviously just the total of this last infinite series. Sounds easy, huh? It is, except that you have to include a LOT of terms to get a very accurate value for Pi!
If you do this for a sum of the terms up to 1/39 (20 terms), you get a value for Pi of about 3.0916236.
If you do this for a sum of the terms up to 1/4001 (2000 terms), you get a value for Pi of 3.1410931.
If you do this for a sum of the terms up to 1/400001 (200,000 terms), you get a value for Pi of 3.1415876.
If you do this for a sum of the terms up to 1/4000001 (2,000,000 terms), you get a value for Pi of 3.14159215.
If you do this for a sum of the terms up to 1/40000001 (20,000,000 terms), you get a value for Pi of 3.14159260.
That would be a LOT of additions and subtractions to get a value for Pi that still is not very impressive! We noted above that the actual value for Pi to ten decimals is 3.1415926535, so with this other method, our 20 million additions and subtractions still only get a precision to around 7 correct decimals. Not nearly as good as the Simpson’s Rule method above, even though it initially looks very attractive!
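
The same sort of machine check works for the series; a minimal Python sketch (again my own, offered only as a way to reproduce partial sums like the ones listed above):

def leibniz_pi(terms):
    # Pi from 4 * (1 - 1/3 + 1/5 - 1/7 + ...), summing the given number of terms.
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

for n in (20, 2000, 200_000):
    print(n, leibniz_pi(n))

Each extra correct decimal costs roughly ten times as many terms, which is exactly the slow convergence the figures above illustrate.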
But we are showing that there are many ways to skin a cat! (figuratively speaking, of course!)

This presentation was first placed on the Internet in November 2006.
This page (How to Calculate Pi, π, Simpson’s Rule) is at http://mb-soft.com/public3/pi.html
This subject presentation was last updated on 01/12/2011 03:40:26


Useful SQL Server DBCC Commands

Handy link:
http://www.sql-server-performance.com/tips/dbcc_commands_p1.aspx
If the link should fail, here is the content:

Continued:
Useful SQL Server DBCC Commands
By : Brad McGehee

DBCC CACHESTATS displays information about the objects currently in the buffer cache, such as hit rates, compiled objects and plans, etc. Example:
DBCC CACHESTATS
Sample Results (abbreviated):
Object Name Hit Ratio
------------ -------------
Proc 0.86420054765378507
Prepared 0.99988494930394334
Adhoc 0.93237136647793051
ReplProc 0.0
Trigger 0.99843452831887947
Cursor 0.42319205924058612
Exec Cxt 0.65279111666076906
View 0.95740334726893905
Default 0.60895011346896522
UsrTab 0.94985969576133511
SysTab 0.0
Check 0.67021276595744683
Rule 0.0
Summary 0.80056155581812771
Here’s what some of the key statistics from this command mean:
⦁ Hit Ratio: Displays the percentage of time that this particular object was found in SQL Server’s cache. The bigger this number, the better.
⦁ Object Count: Displays the total number of objects of the specified type that are cached.
⦁ Avg. Cost: A value used by SQL Server that measures how long it takes to compile a plan, along with the amount of memory needed by the plan. This value is used by SQL Server to determine if the plan should be cached or not.
⦁ Avg. Pages: Measures the total number of 8K pages used, on average, for cached objects.
⦁ LW Object Count, LW Avg Cost, WL Avg Stay, LW Ave Use: All these columns indicate how many of the specified objects have been removed from the cache by the Lazy Writer. The lower the figure, the better.
[7.0, 2000] Updated 9-1-2005

*****

DBCC DROPCLEANBUFFERS: Use this command to remove all the data from SQL Server’s data cache (buffer) between performance tests to ensure fair testing. Keep in mind that this command only removes clean buffers, not dirty buffers. Because of this, you may want to run the CHECKPOINT command before running DBCC DROPCLEANBUFFERS. Running CHECKPOINT will write all dirty buffers to disk, and then when you run DBCC DROPCLEANBUFFERS, you can be assured that all data buffers are cleaned out, not just the clean ones. Example:
DBCC DROPCLEANBUFFERS
[7.0, 2000, 2005] Updated 9-1-2005

*****

DBCC ERRORLOG: If you rarely restart the mssqlserver service, you may find that your server log gets very large and takes a long time to load and view. You can truncate the current server log (essentially creating a new log) by running DBCC ERRORLOG. You might want to consider scheduling a regular job that runs this command once a week to automatically truncate the server log. As a rule, I do this for all of my SQL Servers on a weekly basis. Also, you can accomplish the same thing using this stored procedure: sp_cycle_errorlog. Example:
DBCC ERRORLOG
[7.0, 2000, 2005] Updated 9-1-2005

*****

DBCC FLUSHPROCINDB: Used to clear out the stored procedure cache for a specific database on a SQL Server, not the entire SQL Server. The database ID number to be affected must be entered as part of the command. You may want to use this command before testing to ensure that previous stored procedure plans won’t negatively affect testing results. Example:
DECLARE @intDBID INTEGER
SET @intDBID = (SELECT dbid FROM master.dbo.sysdatabases WHERE name = 'database_name')
DBCC FLUSHPROCINDB (@intDBID)
[7.0, 2000, 2005] Updated 9-1-2005

*****

DBCC INDEXDEFRAG: In SQL Server 2000, Microsoft introduced DBCC INDEXDEFRAG to help reduce logical disk fragmentation. When this command runs, it reduces fragmentation and does not lock tables, allowing users to access the table while the defragmentation process is running. Unfortunately, this command doesn’t do a great job of logical defragmentation. The only way to truly reduce logical fragmentation is to rebuild your table’s indexes. While this will remove all fragmentation, unfortunately it will lock the table, preventing users from accessing it during this process. This means that you will need to find a time when this will not present a problem to your users. Of course, if you are unable to find a time to reindex your indexes, then running DBCC INDEXDEFRAG is better than doing nothing. Example:
DBCC INDEXDEFRAG (Database_Name, Table_Name, Index_Name)
[2000] Updated 9-1-2005

DBCC FREEPROCCACHE: Used to clear out the stored procedure cache for all SQL Server databases. You may want to use this command before testing to ensure that previous stored procedure plans won’t negatively affect testing results. Example:
DBCC FREEPROCCACHE
[7.0, 2000, 2005] Updated 10-16-2005

*****

DBCC MEMORYSTATUS: Lists a breakdown of how the SQL Server buffer cache is divided up, including buffer activity. This is an undocumented command, and one that may be dropped in future versions of SQL Server. Example:
DBCC MEMORYSTATUS
[7.0, 2000] Updated 10-16-2005

*****

DBCC OPENTRAN: An open transaction can leave locks open, preventing others from accessing the data they need in a database. This command is used to identify the oldest open transaction in a specific database. Example:
DBCC OPENTRAN('database_name')
[7.0, 2000] Updated 10-16-2005

*****

DBCC PAGE: Use this command to look at the contents of a data page stored in SQL Server. Example:
DBCC PAGE ({dbid|dbname}, pagenum [,print option] [,cache] [,logical])
where:
Dbid or dbname: Enter either the dbid or the name of the database in question.
Pagenum: Enter the page number of the SQL Server page that is to be examined.
Print option: (Optional) Print option can be either 0, 1, or 2.
  0 - (Default) This option causes DBCC PAGE to print out only the page header information.
  1 - This option causes DBCC PAGE to print out the page header information, each row of information from the page, and the page’s offset table. Each of the rows printed out will be separated from each other.
  2 - This option is the same as option 1, except it prints the page rows as a single block of information rather than separating the individual rows. The offset and header will also be displayed.
Cache: (Optional) This parameter allows either a 1 or a 0 to be entered.
  0 - This option causes DBCC PAGE to retrieve the page number from disk rather than checking to see if it is in cache.
  1 - (Default) This option takes the page from cache if it is in cache rather than getting it from disk only.
Logical: (Optional) This parameter is for use if the page number that is to be retrieved is a virtual page rather than a logical page. It can be either 0 or 1.
  0 - If the page is to be a virtual page number.
  1 - (Default) If the page is the logical page number.

[6.5, 7.0, 2000] Updated 10-16-2005

*****

DBCC PINTABLE & DBCC UNPINTABLE: By default, SQL Server automatically brings into its data cache the pages it needs to work with. These data pages will stay in the data cache until there is no room for them, and assuming they are not needed, these pages will be flushed out of the data cache onto disk. At some point in the future when SQL Server needs these data pages again, it will have to go to disk in order to read them again into the data cache for use. If SQL Server somehow had the ability to keep the data pages in the data cache all the time, then SQL Server’s performance would be increased because I/O could be reduced on the server.

The process of “pinning a table” is a way to tell SQL Server that we don’t want it to flush out data pages for specific named tables once they are read into the cache in the first place. This in effect keeps these database pages in the data cache all the time, which spares SQL Server from having to read the data pages, flush them out, and reread them again when the time arrives. As you can imagine, this can reduce I/O for these pinned tables, boosting SQL Server’s performance.

To pin a table, the command DBCC PINTABLE is used. For example, the script below can be run to pin a table in SQL Server:
DECLARE @db_id int, @tbl_id int
USE Northwind
SET @db_id = DB_ID('Northwind')
SET @tbl_id = OBJECT_ID('Northwind..categories')
DBCC PINTABLE (@db_id, @tbl_id)
While you can use DBCC PINTABLE directly, without the rest of the above script, you will find the script handy because DBCC PINTABLE’s parameters refer to the database and table ID that you want to pin, not their database and table name. This script makes it a little easier to pin a table. You must run this command for every table you want to pin.

Once a table is pinned in the data cache, this does not mean that the entire table is automatically loaded into the data cache. It only means that as data pages from that table are needed by SQL Server, they are loaded into the data cache and then stay there, never being flushed out to disk until you give the command to unpin the table using DBCC UNPINTABLE. It is possible that only part of a table, and not all of it, will be pinned.

When you are done with a table and you no longer want it pinned, you will want to unpin your table. To do so, run this example code:
DECLARE @db_id int, @tbl_id int
USE Northwind
SET @db_id = DB_ID('Northwind')
SET @tbl_id = OBJECT_ID('Northwind..categories')
DBCC UNPINTABLE (@db_id, @tbl_id)
[6.5, 7.0, 2000] Updated 10-16-2005

DBCC PROCCACHE: Displays information about how the stored procedure cache is being used. Example:
DBCC PROCCACHE
[6.5, 7.0, 2000] Updated 10-16-2005

*****

DBCC DBREINDEX: Periodically (weekly or monthly) perform a database reorganization on all the indexes on all the tables in your database. This will rebuild the indexes so that the data is no longer fragmented. Fragmented data can cause SQL Server to perform unnecessary data reads, slowing down SQL Server’s performance. If you perform a reorganization on a table with a clustered index, any non-clustered indexes on that same table will automatically be rebuilt. Database reorganizations can be done by scheduling SQLMAINT.EXE to run using the SQL Server Agent, or by running your own custom script via the SQL Server Agent (see below). Unfortunately, the DBCC DBREINDEX command will not automatically rebuild all of the indexes on all the tables in a database; it can only work on one table at a time. But if you run the following script, you can index all the tables in a database with ease. Example:
DBCC DBREINDEX('table_name', fillfactor)
or
--Script to automatically reindex all tables in a database

USE DatabaseName --Enter the name of the database you want to reindex

DECLARE @TableName varchar(255)

DECLARE TableCursor CURSOR FOR
SELECT table_name FROM information_schema.tables
WHERE table_type = 'base table'

OPEN TableCursor

FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT 'Reindexing ' + @TableName
DBCC DBREINDEX(@TableName, ' ', 90)
FETCH NEXT FROM TableCursor INTO @TableName
END

CLOSE TableCursor

DEALLOCATE TableCursor
The script will automatically reindex every index in every table of any database you select, and provide a fillfactor of 90%. You can substitute any number you want for the 90 in the above script. When DBCC DBREINDEX is used to rebuild indexes, keep in mind that as the indexes on a table are being rebuilt, the table becomes unavailable for use by your users. For example, when a non-clustered index is rebuilt, a shared table lock is put on the table, preventing all but SELECT operations from being performed on it. When a clustered index is rebuilt, an exclusive table lock is put on the table, preventing any table access by your users. Because of this, you should only run this command when users don’t need access to the tables being reorganized.

[7.0, 2000] Updated 10-16-2005

*****

DBCC SHOWCONTIG: Used to show how fragmented data and indexes are in a specified table. If data pages storing data or index information become fragmented, it takes more disk I/O to find and move the data to the SQL Server cache buffer, hurting performance. This command tells you how fragmented these data pages are. If you find that fragmentation is a problem, you can reindex the tables to eliminate the fragmentation. Note: this fragmentation is fragmentation of data pages within the SQL Server MDB file, not of the physical file itself. Since this command requires you to know the ID of both the table and index being analyzed, you may want to run the following script so you don’t have to manually look up the table name ID number and the index ID number. Example:
DBCC SHOWCONTIG (Table_id, IndexID)
Or:
--Script to identify table fragmentation

--Declare variables
DECLARE
@ID int,
@IndexID int,
@IndexName varchar(128)

--Set the table and index to be examined
SELECT @IndexName = 'index_name' --enter name of index
SET @ID = OBJECT_ID('table_name') --enter name of table

--Get the Index Values
SELECT @IndexID = IndID
FROM sysindexes
WHERE id = @ID AND name = @IndexName

--Display the fragmentation
DBCC SHOWCONTIG (@ID, @IndexID)
While the DBCC SHOWCONTIG command provides several measurements, the key one is Scan Density. This figure should be as close to 100% as possible. If the scan density is less than 75%, then you may want to reindex the tables in your database.

[6.5, 7.0, 2000] Updated 3-20-2006

*****

DBCC SHOW_STATISTICS: Used to find out the selectivity of an index. Generally speaking, the higher the selectivity of an index, the greater the likelihood it will be used by the query optimizer. You have to specify both the table name and the index name you want to find the statistics on. Example:
DBCC SHOW_STATISTICS (table_name, index_name)
[7.0, 2000] Updated 3-20-2006

DBCC SQLMGRSTATS: Used to produce three different values that can sometimes be useful when you want to find out how well caching is being performed on ad-hoc and prepared Transact-SQL statements. Example:
DBCC SQLMGRSTATS
Sample Results:
Item Status
------------------------- -----------
Memory Used (8k Pages) 5446
Number CSql Objects 29098
Number False Hits 425490
Here’s what the above means:
⦁ Memory Used (8k Pages): If the amount of memory pages is very large, this may be an indication that some user connection is preparing many Transact-SQL statements, but is not un-preparing them.
⦁ Number CSql Objects: Measures the total number of cached Transact-SQL statements.
⦁ Number False Hits: Sometimes, false hits occur when SQL Server goes to match pre-existing cached Transact-SQL statements. Ideally, this figure should be as low as possible.
[2000] Added 4-17-2003

*****

DBCC SQLPERF(): This command includes both documented and undocumented options. Let’s take a look at all of them and see what they do.
DBCC SQLPERF (LOGSPACE)
This option (documented) returns data about the transaction log for all of the databases on the SQL Server, including Database Name, Log Size (MB), Log Space Used (%), and Status.
DBCC SQLPERF (UMSSTATS)
This option (undocumented) returns data about SQL Server thread management.
DBCC SQLPERF (WAITSTATS)
This option (undocumented) returns data about wait types for SQL Server resources.
DBCC SQLPERF (IOSTATS)
This option (undocumented) returns data about outstanding SQL Server reads and writes.
DBCC SQLPERF (RASTATS)
This option (undocumented) returns data about SQL Server read-ahead activity.
DBCC SQLPERF (THREADS)
This option (undocumented) returns data about I/O, CPU, and memory usage per SQL Server thread.

[7.0, 2000] Updated 3-20-2006

*****

DBCC SQLPERF (UMSSTATS): When you run this command, you get output like this. (Note: this example was run on a 4 CPU server. There is 1 Scheduler ID per available CPU.)
Statistic Value
-------------------------------- ------------------------
Scheduler ID 0.0
num users 18.0
num runnable 0.0
num workers 13.0
idle workers 11.0
work queued 0.0
cntxt switches 2.2994396E+7
cntxt switches(idle) 1.7793976E+7
Scheduler ID 1.0
num users 15.0
num runnable 0.0
num workers 13.0
idle workers 10.0
work queued 0.0
cntxt switches 2.4836728E+7
cntxt switches(idle) 1.6275707E+7
Scheduler ID 2.0
num users 17.0
num runnable 0.0
num workers 12.0
idle workers 11.0
work queued 0.0
cntxt switches 1.1331447E+7
cntxt switches(idle) 1.6273097E+7
Scheduler ID 3.0
num users 16.0
num runnable 0.0
num workers 12.0
idle workers 11.0
work queued 0.0
cntxt switches 1.1110251E+7
cntxt switches(idle) 1.624729E+7
Scheduler Switches 0.0
Total Work 3.1632352E+7

Below is an explanation of some of the key statistics above:
⦁ num users: This is the number of SQL Server threads currently in the scheduler.
⦁ num runnable: This is the number of actual SQL Server threads that are runnable.
⦁ num workers: This is the actual number of workers there are to process threads. This is the size of the thread pool.
⦁ idle workers: The number of workers that are currently idle.
⦁ cntxt switches: The number of context switches between runnable threads.
⦁ cntxt switches (idle): The number of context switches to the idle thread.
[2000] Added 4-17-2003

DBCC TRACEON & DBCC TRACEOFF: Used to turn on and off trace flags. Trace flags are often used to turn on and off specific server behavior or server characteristics temporarily. On rare occasions, they can be useful for troubleshooting SQL Server performance problems. Example: To use the DBCC TRACEON command to turn on a specified trace flag, use this syntax:
DBCC TRACEON (trace# [,…n])
To use the DBCC TRACEOFF command to turn off a specified trace flag, use this syntax:
DBCC TRACEOFF (trace# [,…n])
You can also use the DBCC TRACESTATUS command to find out which trace flags are currently turned on in your server using this syntax:
DBCC TRACESTATUS (trace# [,…n])
For specific information on the different kinds of trace flags available, search this website or look them up in Books Online.

[6.5, 7.0, 2000] Updated 3-20-2006

*****

DBCC UPDATEUSAGE: The official use for this command is to report and correct inaccuracies in the sysindexes table, which may result in incorrect space usage reports. Apparently, it can also fix the problem of unreclaimed data pages in SQL Server. You may want to consider running this command periodically to clean up potential problems. This command can take some time to run, and you will want to run it during off hours because it will negatively affect SQL Server’s performance while running. When you run this command, you must specify the name of the database that you want affected. Example:
DBCC UPDATEUSAGE ('databasename')
[7.0, 2000] Updated 3-20-2006

© 2000 – 2008 vDerivatives Limited All Rights Reserved.

Release artifacts and artifact sources

https://docs.microsoft.com/en-us/azure/devops/pipelines/release/artifacts?view=vsts


Note

Build and release pipelines are called definitions in TFS 2018 and in older versions. Service connections are called service endpoints in TFS 2018 and in older versions.

A release is a collection of artifacts in your DevOps CI/CD processes. An artifact is a deployable component of your application. Azure Pipelines can deploy artifacts that are produced by a wide range of artifact sources, and stored in different types of artifact repositories.

When authoring a release pipeline, you link the appropriate artifact sources to your release pipeline. For example, you might link an Azure Pipelines build pipeline or a Jenkins project to your release pipeline.

When creating a release, you specify the exact version of these artifact sources; for example, the number of a build coming from Azure Pipelines, or the version of a build coming from a Jenkins project.

After a release is created, you cannot change these versions. A release is fundamentally defined by the versioned artifacts that make up the release. As you deploy the release to various stages, you will be deploying and validating the same artifacts in all stages.

A single release pipeline can be linked to multiple artifact sources, of which one is the primary source. In this case, when you create a release, you specify individual versions for each of these sources.

Artifacts in a pipeline and release

Artifacts are central to a number of features in Azure Pipelines. Some of the features that depend on the linking of artifacts to a release pipeline are:

  • Auto-trigger releases. You can configure new releases to be automatically created whenever a new version of an artifact is produced. For more details, see Continuous deployment triggers. Note that the ability to automatically create releases is available for only some artifact sources.
  • Trigger conditions. You can configure a release to be created automatically, or the deployment of a release to a stage to be triggered automatically, when only specific conditions on the artifacts are met. For example, you can configure releases to be automatically created only when a new build is produced from a certain branch.
  • Artifact versions. You can configure a release to automatically use a specific version of the build artifacts, to always use the latest version, or to allow you to specify the version when the release is created.
  • Artifact variables. Every artifact that is part of a release has metadata associated with it, exposed to tasks through variables. This metadata includes the version number of the artifact, the branch of code from which the artifact was produced (in the case of build or source code artifacts), the pipeline that produced the artifact (in the case of build artifacts), and more. This information is accessible in the deployment tasks. For more details, see Artifact variables.
  • Work items and commits. The work items or commits that are part of a release are computed from the versions of artifacts. For example, each build in Azure Pipelines is associated with a set of work items and commits. The work items or commits in a release are computed as the union of all work items and commits of all builds between the current release and the previous release. Note that Azure Pipelines is currently able to compute work items and commits for only certain artifact sources.
  • Artifact download. Whenever a release is deployed to a stage, by default Azure Pipelines automatically downloads all the artifacts in that release to the agent where the deployment job runs. The procedure to download artifacts depends on the type of artifact. For example, Azure Pipelines artifacts are downloaded using an algorithm that downloads multiple files in parallel. Git artifacts are downloaded using Git library functionality. For more details, see Artifact download.

Artifact sources

There are several types of tools you might use in your application lifecycle process to produce or store artifacts. For example, you might use continuous integration systems such as Azure Pipelines, Jenkins, or TeamCity to produce artifacts. You might also use version control systems such as Git or TFVC to store your artifacts. Or you can use repositories such as Package Management in Visual Studio Team Services or a NuGet repository to store your artifacts. You can configure Azure Pipelines to deploy artifacts from all these sources.

By default, a release created from the release pipeline will use the latest version of the artifacts. At the time of linking an artifact source to a release pipeline, you can change this behavior by selecting one of these options: use the latest build from a specific branch (optionally filtered by tags), use a specific version, or allow the user to specify the version when the release is created from the pipeline.

Adding an artifact

If you link more than one set of artifacts, you can specify which is the primary (default).

Selecting a default version option

The following sections describe how to work with the different types of artifact sources.


Azure Pipelines

You can link a release pipeline to any of the build pipelines in an Azure Pipelines or TFS project collection.

Note

You must include a Publish Artifacts task in your build pipeline. For XAML build pipelines, an artifact with the name drop is published implicitly.

Some of the differences in capabilities between different versions of TFS and Azure Pipelines are:

  • TFS 2015: You can link build pipelines only from the same project of your collection. You can link multiple definitions, but you cannot specify default versions. You can set up a continuous deployment trigger on only one of the definitions. When multiple build pipelines are linked, the latest builds of all the other definitions are used, along with the build that triggered the release creation.
  • TFS 2017 and newer and Azure Pipelines: You can link build pipelines from any of the projects in Azure Pipelines or TFS. You can link multiple build pipelines and specify default values for each of them. You can set up continuous deployment triggers on multiple build sources. When any of the builds completes, it will trigger the creation of a release.

The following features are available when using Azure Pipelines sources:

  • Auto-trigger releases: New releases can be created automatically when new builds (including XAML builds) are produced. See Continuous Deployment for details. You do not need to configure anything within the build pipeline. See the notes above for differences between versions of TFS.
  • Artifact variables: A number of artifact variables are supported for builds from Azure Pipelines.
  • Work items and commits: Azure Pipelines integrates with work items in TFS and Azure Pipelines. These work items are also shown in the details of releases. Azure Pipelines integrates with a number of version control systems such as TFVC and Git, GitHub, Subversion, and external Git repositories. Azure Pipelines shows the commits only when the build is produced from source code in TFVC or Git.
  • Artifact download: By default, build artifacts are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.
  • Deployment section in build: The build summary includes a Deployment section, which lists all the stages to which the build was deployed.

TFVC, Git, and GitHub

There are scenarios in which you may want to consume artifacts stored in a version control system directly, without passing them through a build pipeline. For example:

  • You are developing a PHP or a JavaScript application that does not require an explicit build pipeline.
  • You manage configurations for various stages in different version control repositories, and you want to consume these configuration files directly from version control as part of the deployment pipeline.
  • You manage your infrastructure and configuration as code (such as Azure Resource Manager templates) and you want to manage these files in a version control repository.

Because you can configure multiple artifact sources in a single release pipeline, you can link both a build pipeline that produces the binaries of the application as well as a version control repository that stores the configuration files into the same pipeline, and use the two sets of artifacts together while deploying.

Azure Pipelines integrates with Team Foundation Version Control (TFVC) repositories, Git repositories, and GitHub repositories.

You can link a release pipeline to any of the Git or TFVC repositories in any of the projects in your collection (you will need read access to these repositories). No additional setup is required when deploying version control artifacts within the same collection.

When you link a Git or GitHub repository and select a branch, you can edit the default properties of the artifact types after the artifact has been saved. This is particularly useful in scenarios where the branch for the stable version of the artifact changes, and continuous delivery releases should use this branch to obtain newer versions of the artifact. You can also specify details of the checkout, such as whether to check out submodules and LFS-tracked files, and the shallow fetch depth.

When you link a TFVC branch, you can specify the changeset to be deployed when creating a release.

The following features are available when using TFVC, Git, and GitHub sources:

  • Auto-trigger releases: You can configure a continuous deployment trigger for pushes into the repository in a release pipeline. This can automatically trigger a release when a new commit is made to a repository. See Triggers.
  • Artifact variables: A number of artifact variables are supported for version control sources.
  • Work items and commits: Azure Pipelines cannot show work items or commits associated with releases when using version control artifacts.
  • Artifact download: By default, version control artifacts are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

Jenkins

To consume Jenkins artifacts, you must create a service connection with credentials to connect to your Jenkins server. For more details, see service connections and Jenkins service connection. You can then link a Jenkins project to a release pipeline. The Jenkins project must be configured with a post build action to publish the artifacts.

The following features are available when using Jenkins sources:

  • Auto-trigger releases: You can configure a continuous deployment trigger for pushes into the repository in a release pipeline. This can automatically trigger a release when a new commit is made to a repository. See Triggers.
  • Artifact variables: A number of artifact variables are supported for builds from Jenkins.
  • Work items and commits: Azure Pipelines cannot show work items or commits for Jenkins builds.
  • Artifact download: By default, Jenkins builds are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

Artifacts generated by Jenkins builds are typically propagated to storage repositories for archiving and sharing. Azure blob storage is one of the supported repositories, allowing you to consume Jenkins projects that publish to Azure storage as artifact sources in a release pipeline. Deployments download the artifacts automatically from Azure to the agents. In this configuration, connectivity between the agent and the Jenkins server is not required. Microsoft-hosted agents can be used without exposing the server to the internet.

Note

Azure Pipelines may not be able to contact your Jenkins server if, for example, it is within your enterprise network. In this case you can integrate Azure Pipelines with Jenkins by setting up an on-premises agent that can access the Jenkins server. You will not be able to see the name of your Jenkins projects when linking to a build, but you can type this into the link dialog field.

For more information about Jenkins integration capabilities, see Azure Pipelines Integration with Jenkins Jobs, Pipelines, and Artifacts.


Azure Container Registry, Docker, Kubernetes

When deploying containerized apps, the container image is first pushed to a container registry. After the push is complete, the container image can be deployed to the Web App for Containers service or a Docker/Kubernetes cluster. You must create a service connection with credentials to connect to your service to deploy images located there, or to Azure. For more details, see service connections.

The following features are available when using Azure Container Registry, Docker, or Kubernetes sources:

  • Auto-trigger releases: You can configure a continuous deployment trigger for images. This can automatically trigger a release when a new version of an image is pushed to the repository. See Triggers.
  • Artifact variables: A number of artifact variables are supported for builds.
  • Work items and commits: Azure Pipelines cannot show work items or commits.
  • Artifact download: By default, builds are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

NuGet and npm packages from Package Management

To integrate with NuGet or npm (Maven is not currently supported), you must first assign licenses for the Package Management extension from the Marketplace. For more information, see the Package Management Overview.

Scenarios where you may want to consume Package Management artifacts are:

  1. Your application build (from TFS, Azure Pipelines, TeamCity, or Jenkins, for example) is published as a package (NuGet or npm) to Package Management, and you want to consume that artifact in a release.
  2. As part of your application deployment, you need additional packages stored in Package Management.

When you link a Package Management artifact to your release pipeline, you must select the Feed, Package, and the Default version for the package. You can choose to pick up the latest version of the package, use a specific version, or select the version at the time of release creation. During deployment, the package is downloaded to the agent folder and the contents are extracted as part of the job execution.

The following features are available when using Package Management sources:

  • Auto-trigger releases: You can configure a continuous deployment trigger for packages. This can automatically trigger a release when a package is updated. See Triggers.
  • Artifact variables: A number of artifact variables are supported for packages.
  • Work items and commits: Azure Pipelines cannot show work items or commits.
  • Artifact download: By default, packages are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

External or on-premises TFS

You can use Azure Pipelines to deploy artifacts published by an on-premises TFS server. You don’t need to make the TFS server visible on the Internet; you just set up an on-premises automation agent. Builds from an on-premises TFS server are downloaded directly into the on-premises agent, and then deployed to the specified target servers. They will not leave your enterprise network. This allows you to leverage all of your investments in your on-premises TFS server, and take advantage of the release capabilities in Azure Pipelines.

Using this mechanism, you can also deploy artifacts published in one Azure Pipelines subscription from another, or deploy artifacts published in one Team Foundation Server from another Team Foundation Server.

To enable these scenarios, you must install the TFS artifacts for Azure Pipelines extension from Visual Studio Marketplace. Then create a service connection with credentials to connect to your TFS server (see service connections for details).

You can then link a TFS build pipeline to your release pipeline. Choose External TFS Build in the Type list.

The following features are available when using external TFS sources:

  • Auto-trigger releases: You cannot configure a continuous deployment trigger for external TFS sources in a release pipeline. To automatically create a new release when a build is complete, you would need to add a script to your build pipeline in the external TFS server that invokes the Azure Pipelines REST APIs to create a new release (see the sketch after this list).
  • Artifact variables: A number of artifact variables are supported for external TFS sources.
  • Work items and commits: Azure Pipelines cannot show work items or commits for external TFS sources.
  • Artifact download: By default, external TFS artifacts are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.
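
As a rough sketch of that approach, a build step on the external server could call the release creation REST API with PowerShell. The organization name, project name, release definition ID, and personal access token below are placeholders, and the API version may differ for your environment; check the Azure Pipelines REST API reference for the exact payload.

# Sketch: create a release in Azure Pipelines by calling its REST API from an external build step.
# "fabrikam", "MyProject", definition ID 7, and the token value are placeholders.
$pat    = "<personal access token>"
$auth   = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
$body   = @{ definitionId = 7 } | ConvertTo-Json
$params = @{
    Method      = "Post"
    Uri         = "https://vsrm.dev.azure.com/fabrikam/MyProject/_apis/release/releases?api-version=5.0"
    Headers     = @{ Authorization = "Basic $auth" }
    ContentType = "application/json"
    Body        = $body
}
Invoke-RestMethod @params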

Note

Azure Pipelines may not be able to contact an on-premises TFS server if, for example, it is within your enterprise network. In this case you can integrate Azure Pipelines with TFS by setting up an on-premises agent that can access the TFS server. You will not be able to see the name of your TFS projects or build pipelines when linking to a build, but you can type these into the link dialog fields. In addition, when you create a release, Azure Pipelines may not be able to query the TFS server for the build numbers. Instead, type the Build ID (not the build number) of the desired build into the appropriate field, or select the Latest build.


TeamCity

To integrate with TeamCity, you must first install the TeamCity artifacts for Azure Pipelines extension from Marketplace.

To consume TeamCity artifacts, start by creating a service connection with credentials to connect to your TeamCity server (see service connections for details).

You can then link a TeamCity build configuration to a release pipeline. The TeamCity build configuration must be configured with an action to publish the artifacts.

The following features are available when using TeamCity sources:

  • Auto-trigger releases: You cannot configure a continuous deployment trigger for TeamCity sources in a release pipeline. To create a new release automatically when a build is complete, add a script to your TeamCity project that invokes the Azure Pipelines REST APIs to create a new release.
  • Artifact variables: A number of artifact variables are supported for builds from TeamCity.
  • Work items and commits: Azure Pipelines cannot show work items or commits for TeamCity builds.
  • Artifact download: By default, TeamCity builds are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

Note

Azure Pipelines may not be able to contact your TeamCity server if, for example, it is within your enterprise network. In this case you can integrate Azure Pipelines with TeamCity by setting up an on-premises agent that can access the TeamCity server. You will not be able to see the name of your TeamCity projects when linking to a build, but you can type this into the link dialog field.


Other sources

Your artifacts may be created and exposed by other types of sources, such as a NuGet repository. While we continue to expand the types of artifact sources supported in Azure Pipelines, you can start using it without waiting for support for a specific source type: simply skip linking an artifact source in the release pipeline, and add custom tasks to your stages that download the artifacts directly from your source.


Artifact download

When you deploy a release to a stage, the versioned artifacts from each of the sources are, by default, downloaded to the automation agent so that tasks running within that stage can deploy these artifacts. The artifacts downloaded to the agent are not deleted when a release is completed. However, when you initiate the next release, the downloaded artifacts are deleted and replaced with the new set of artifacts.

A new unique folder in the agent is created for every release pipeline when you initiate a release, and the artifacts are downloaded into that folder. The $(System.DefaultWorkingDirectory) variable maps to this folder.

Note that, at present, Azure Pipelines does not perform any optimization to avoid downloading the unchanged artifacts if the same release is deployed again. In addition, because the previously downloaded contents are always deleted when you initiate a new release, Azure Pipelines cannot perform incremental downloads to the agent.

You can, however, instruct Azure Pipelines to skip the automatic download of artifacts to the agent for a specific job and stage of the deployment, or select only the artifacts you want to download. Typically, you will do this to improve the efficiency of the deployment pipeline when the tasks in that job do not require all or any of the artifacts, or if you implement custom code in a task to download the artifacts you require.

Selecting the artifacts to download

Artifact source alias

To ensure the uniqueness of every artifact download, each artifact source linked to a release pipeline is automatically provided with a specific download location known as the source alias. This location can be accessed through the variable:

$(System.DefaultWorkingDirectory)\[source alias]

This uniqueness also ensures that, if you later rename a linked artifact source in its original location (for example, rename a build pipeline in Azure Pipelines or a project in Jenkins), you don’t need to edit the task properties because the download location defined in the agent does not change.

The source alias is, by default, the name of the source selected when you linked the artifact source, prefixed with an underscore; depending on the type of the artifact source this will be the name of the build pipeline, job, project, or repository. You can edit the source alias from the artifacts tab of a release pipeline; for example, when you change the name of the build pipeline and you want to use a source alias that reflects the name of the build pipeline.

The source alias can contain only alphanumeric characters and underscores, and must start with a letter or an underscore.
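
For example, a PowerShell task in a deployment job could list the files downloaded for a source whose alias is _FabrikamCI. This is a minimal sketch; the alias name is hypothetical, and the $(...) macro is expanded by the agent before the script runs.

# Sketch: enumerate the files downloaded for the hypothetical source alias "_FabrikamCI".
$artifactFolder = "$(System.DefaultWorkingDirectory)\_FabrikamCI"
Get-ChildItem -Path $artifactFolder -Recurse | ForEach-Object { Write-Host $_.FullName }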

Primary source

When you link multiple artifact sources to a release pipeline, one of them is designated as the primary artifact source. The primary artifact source is used to set a number of pre-defined variables. It can also be used in naming releases.

Artifact variables

Azure Pipelines exposes a set of pre-defined variables that you can access and use in tasks and scripts; for example, when executing PowerShell scripts in deployment jobs. When there are multiple artifact sources linked to a release pipeline, you can access information about each of these. For a list of all pre-defined artifact variables, see variables.
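
For example, a PowerShell task could echo a few of these variables for the hypothetical source alias _FabrikamCI. This is a sketch; the variable names shown are among the documented artifact variables for build sources, and the $(...) macros are expanded by the agent before the script runs.

# Sketch: print some pre-defined artifact variables for the hypothetical alias "_FabrikamCI".
Write-Host "Build number:  $(Release.Artifacts._FabrikamCI.BuildNumber)"
Write-Host "Source branch: $(Release.Artifacts._FabrikamCI.SourceBranch)"
Write-Host "Requested for: $(Release.Artifacts._FabrikamCI.RequestedFor)"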

How to use an SQL query with parameters in Excel

1. Intro
2. Set up the query to run as a stored procedure
3. Prepare Excel
4. Make ODBC connection to database in SQL
5. Prepare “Microsoft query”
6. Link fields to query parameters

Simple example query:

declare @StartDate datetime set @StartDate = '2018-01-01'

declare @EndDate datetime set @EndDate = '2018-01-31'

select * from tblOrder where orderdate >= @StartDate and orderdate <= @EndDate

Create and run a script that creates a stored procedure:

CREATE PROCEDURE spSelectOrder
    -- Add the parameters for the stored procedure here
    @StartDate As DateTime,
    @EndDate As DateTime
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- Select statement for the procedure
    select * from tblOrder where orderdate >= @StartDate and orderdate <= @EndDate
END
GO

This stored procedure can be run in Excel using Microsoft query. To run this query, prepare a worksheet with the parameters filled. These parameters will be used as input for the query later.
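
For example, the worksheet could be laid out like this (the labels are assumptions; the later steps reference cells B1 and B2 as the parameter inputs and cell A4 as the top-left cell of the results):

A1: StartDate    B1: 2018-01-01
A2: EndDate      B2: 2018-01-31
A4: (the query results will be returned starting here)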

The next step is to add the data source to the worksheet. On the Data tab, choose “From Other Sources”, then “From Microsoft Query”. This will start a wizard to create a data connection:

1. Select or create a Data Source.

2. The next step in the wizard is Choose Columns. Cancel the wizard on this screen and a question will pop up asking if you want to continue editing this query. Click Yes.

3. MS Query will be started and a dialog to select tables will be presented. Close this dialog.

4. Click the SQL button in the button bar, or choose View, SQL from the menu.

5. Type “call” followed by the stored procedure name and question marks for the parameter input between parentheses. Place this in between curly braces. These curly braces are required to avoid syntax check errors:

{call spSelectOrder (?, ?)}

6. Press OK and you will be prompted to input values for the required parameters.

The results will be retrieved in a query result window. After the result is presented, go to File and click “Return Data to Microsoft Excel”.

Microsoft Query will be closed and you will return to Excel with a dialog to import the data. Choose cell A4 in the worksheet.

Again you will be prompted to input values for the parameters. This time you are able to select cells B1 and B2 as input. Check the checkbox in the dialog to use this reference for future refreshes. If you want, you can also check the option to refresh the data when the cell value changes.

If you want to manually refresh the data, you can right-click anywhere in the data grid and select “Refresh”.

How to develop with various app.config files

When I want to test my application in Microsoft Visual Studio, I want to use an alternate app.config that differs from the production one. In my development environment I have other connection strings, etc.

To switch between the development and production configurations, I have come up with the following setup:

I want to switch configuration in the IDE and have the build use the appropriate app.config file.

For each environment I have added a separate app.config file to my project.

Next I have added a pre-build.bat and a post-build.bat in the solution.

The pre-build.bat file contains one command to replace app.config for the selected configuration:

copy /y %1\app.config.%2 %1\app.config

The post-build.bat file contains one command to replace the app.config with the release or “production” configuration.
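
Assuming the production configuration is named Release (an assumption; use your own configuration name), post-build.bat could contain a single line such as:

copy /y %1\app.config.Release %1\app.config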

The last step is to tie these components together. Add a Pre-build and a Post-build event on the project properties page. You can find these settings by right-clicking the project and clicking Properties.

The command I added to start pre-build.bat is:

"$(ProjectDir)pre-build.bat" "$(ProjectDir)" "$(ConfigurationName)"

and for the post-build.bat the command is:

"$(ProjectDir)post-build.bat" "$(ProjectDir)"

Continuously Deploy Your ClickOnce Application From Your Build Server

http://blog.danskingdom.com/continuously-deploy-your-clickonce-application-from-your-build-server/ from Daniel Schroeder’s (aka deadlydog) Programming Blog

ClickOnce applications are a great and easy way to distribute your applications to many users, and have the advantage of offering automatic application updates out-of-the-box. Even though ClickOnce applications are super easy to deploy from Visual Studio (literally 3 clicks, just click Build -> Publish -> Finish), you may still want to have your build system publish the updates for various reasons, such as:

You don’t want yourself (or your team members) to have to remember to manually publish a new version all the time, even if it is very quick and easy.

If you are signing your ClickOnce application (to make your app more secure and avoid annoying Windows security warnings during every update), your team members may not have the certificate installed, or you may not want them to know the certificate password.

All of the other benefits of deploying automatically from a build server; you can be sure the project was built in Release mode, all unit tests have run and passed, you are only publishing a new version of the application from a specific branch (e.g. master), etc.

In this post I’m going to show how to continuously deploy your ClickOnce application using Visual Studio Team Services (VSTS), but you can adapt what’s shown here to work on any build system, such as TeamCity or Jenkins.

Step 1 – Configure and publish your ClickOnce application manually

Before we can publish a new version of the application from the build server, we first have to build the project to create the artifacts to publish. And even before that, we have to make sure the project is setup properly with all of the required ClickOnce metadata. You will typically do this by going into your project Properties page and going to the Publish tab. There are several other websites and blog posts that discuss configuring a project for ClickOnce deployment, including the official MSDN documentation for configuring and publishing ClickOnce applications, so I won’t go into it any further here.

Basically you should have your project in a state where you can easily publish a new version manually, and it is configured the way that you want (includes the necessary files, has the destination to publish to specified, specifies if the application is only available online or not, etc.). Once you’ve published your project manually and confirmed that it’s configured the way you like, we can move onto the next step.

Step 2 – Setup the build on your build server

On your build server you will want to configure a new vanilla build that builds your project/solution. There are a couple of modifications you will need to make that are different from building a regular console app or class library project.

The first difference is that you will need to provide the “/target:Publish” argument to MSBuild when it builds your project. Here is what this looks like in VSTS:

This will cause MSBuild to build the required artifacts into an “app.publish” directory, e.g. bin\Debug\app.publish.
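
Outside of VSTS, the equivalent command line would look roughly like this (the project file name and configuration are placeholders):

msbuild MyClickOnceApp.csproj /target:Publish /p:Configuration=Release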

The next difference is that you will want to copy that “app.publish” directory to your build artifacts directory. To do this, you will need to add a Copy Files step into your build process that copies the “app.publish” directory from the ClickOnce project’s bin directory to where the build artifacts are expected to be. You will want to do this before the step that publishes your build artifacts. Here is what this looks like in VSTS:

So we copy the files into the build artifacts directory, and then the Publish Build Artifacts step at the end will copy those files to wherever you’ve specified; in my case it’s a network share.
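
As a rough sketch, the Copy Files step could be configured with values along these lines; the paths below are assumptions based on the standard predefined variables, not the exact values from my setup:

Source Folder: $(Build.SourcesDirectory)
Contents:      **\app.publish\**
Target Folder: $(Build.ArtifactStagingDirectory)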

If you like you can now run the build and see if it succeeds. If the build fails with an error relating to an expired certificate or pfx file, see my other blog post on importing the required certificate on the build server at build-time, which involves adding one more “Import-PfxCertificate.ps1” build step before the MSBuild step.

For completeness’ sake, this is what my Publish Build Artifacts step looks like in VSTS, and you’ll also notice the “Import-PfxCertificate.ps1” step before the MSBuild step as well:

So we now have the ClickOnce artifacts being generated and stored in the appropriate directory. If you wanted, you could publish the build artifacts to the ClickOnce application’s final destination right now (instead of a file share as I’ve done here), but I’m going to follow best practices and separate the application “build” and “deployment” portions into their respective subsystems, as you may want separate control over when a build gets published, or maybe you don’t want to publish EVERY build; only some of them.

Hopefully at this point you are able to create a successful build, but we’re not done yet.

 Step 3 – Publish the build artifacts to the ClickOnce application’s destination

Now that we have the build artifacts safely stored, we can publish them to the ClickOnce application’s destination. With VSTS this is done by using the Release subsystem. So create a new Release Definition and setup an Environment for it. By default it adds a “Copy and Publish Build Artifacts” step to the release definition. When configuring and using this default step, I received the error:

[error]Copy and Publish Build Artifacts task is not supported within Release

So instead I removed that default step and added a simple “Copy Files” step, and then configured it to grab the files from the file share that the build published the artifacts to, and set the destination to where the ClickOnce application was publishing to when I would do a manual publish from within Visual Studio. Here is what that step looks like in VSTS:

You should be able to run this release definition and see that it is able to post a new version of your ClickOnce application. Hooray! I setup my releases to automatically publish on every build, but you can configure yours however you like.

If you make changes and commit them, the build should create new artifacts, and the release should be able to publish the new version. However, if you launch your ClickOnce application, you may be surprised that it doesn’t actually update to the latest version that you just committed and published. This is because we need one additional build step to actually update the ClickOnce version so it can detect that a new version was published. If you launch another build and publish them, you’ll see that the files being published are overwriting the previous files, instead of getting published to a new directory as they are supposed to be.

You may be wondering why we need another build step to update the ClickOnce version. Part of the build artifacts that are created are a .application manifest file, and a directory that contains the applications files, both of which have the ClickOnce version in them. Can’t we just modify the directory name and .application manifest file to increment the old version number? This was my initial thought, but the .application manifest file contains a cryptography hash to ensure that nobody has tampered with it, meaning that it needs to contain the proper version number at the time that it is generated.

Step 4 – One more build step to update the ClickOnce version

The ClickOnce version is defined in your project file (.csproj/.vbproj) as the <ApplicationVersion> and <ApplicationRevision> XML elements, and is different from the assembly version that you would typically set to version your .dll/.exe files via the AssemblyInfo.cs file. Unless you manually updated the ClickOnce version in Visual Studio, it will still have its original version. When you publish from Visual Studio, Visual Studio automatically takes care of incrementing the ClickOnce version’s Revision part. However, because we’re publishing from our build system now, we’ll need to come up with an alternative method. To help solve this problem, I have created the Set-ProjectFilesClickOnceVersion GitHub repository. This repository houses a single Set-ProjectFilesClickOnceVersion.ps1 PowerShell script that we will want to download and add into our build process.

In order for the build system to access this Set-ProjectFilesClickOnceVersion.ps1 script you will need to check it into source control. Also, you need to make sure you add this build step before the MSBuild step. Here is what this step looks like in VSTS:

The Script Filename points to the location of the Set-ProjectFilesClickOnceVersion.ps1 script in my repository. Also note that I set the Working Folder to the directory that contains the .csproj project file. This is necessary since we are dealing with relative paths, not absolute ones.

For the Arguments I provide the name of the project file that corresponds to my ClickOnce application, as well as the VSTS Build ID. The Build ID is a globally unique ID that gets auto-incremented with every build, and most (all?) build systems have this. The script will translate this Build ID into the Build and Revision version number parts to use for the ClickOnce version, to ensure that they are always increasing in value. So you should never need to manually modify the ClickOnce version, and every build should generate a version number greater than the last, allowing the ClickOnce application to properly detect when a newer version has been published.

Maybe you update your application’s version number on every build, and want your ClickOnce version to match that version. If this is the case, then use the -Version parameter instead of -BuildSystemsBuildId. The other alternative is to use the -IncrementProjectFilesRevision switch to simply increment the ClickOnce Revision that is already stored in the project file. If you use this switch, you need to be sure to check the project file back into source control so that the Revision is properly incremented again on subsequent builds. I like to avoid my build system checking files into source control wherever possible, so I have opted for the -BuildSystemsBuildId parameter here.

The last argument I have included is the UpdateMinimumRequiredVersionToCurrentVersion parameter, and it is optional. If provided, this parameter will change the ClickOnce application’s Minimum Required Version to be the same as the new version, forcing the application to always update to the latest version when it detects that an update is available. I typically use this setting for all of my ClickOnce applications. If you still plan on publishing your ClickOnce applications from Visual Studio, but want this functionality, simply install the AutoUpdateProjectsMinimumRequiredClickOnceVersion NuGet Package.
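
Putting those switches together with the Build ID variable, the Arguments field might look roughly like this (a sketch; the script also needs the path to your .csproj file, and the exact name of that parameter is documented in the repository’s readme rather than here):

-BuildSystemsBuildId $(Build.BuildId) -UpdateMinimumRequiredVersionToCurrentVersion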

That’s it! Now if you launch another build it should create new artifacts with a larger ClickOnce version number, and once published your application should update to the new version.

 

Bonus – Displaying the ClickOnce version in your application

The ClickOnce application version number may be different than your assembly version number, or perhaps you don’t update your product’s version number on every publish; it’s easy to forget. I typically like to use semantic versioning for my projects, which only involves the first 3 version parts. This leaves the last version part, the Revision number, available for me to set as I please. I will typically use the ClickOnce Revision in my application’s displayed version number. This makes it easy to tell which actual release a client has installed in the event that the product version was not updated between releases. The code to do it is fairly simple.

// ApplicationDeployment lives in System.Deployment.Application (System.Deployment assembly).
public Version ApplicationVersion = new Version("1.11.2");

private void MainWindow_Loaded(object sender, RoutedEventArgs e)
{
    // If this is a ClickOnce deployment, append the ClickOnce Revision to the
    // version number (as it gets updated with each publish).
    if (ApplicationDeployment.IsNetworkDeployed)
    {
        ApplicationVersion = new Version(ApplicationVersion.Major, ApplicationVersion.Minor,
            ApplicationVersion.Build, ApplicationDeployment.CurrentDeployment.CurrentVersion.Revision);
    }

    // Display the version as part of the window title.
    wndMainWindow.Title += ApplicationVersion;
}

Here I hard-code the product version that I want displayed in my application in a variable called ApplicationVersion. When the WPF application is launched, I obtain the ClickOnce Revision and append it onto the end of the version number. I then display the version in my application’s window title, but you might want to show it somewhere else, such as in an About window. If you want, you could even display both the full application version and full ClickOnce version.