Simple unit tests in Visual Studio solutions

In Visual Studio it is really simple to add unit tests that are automatically run in your build script. Right click your solution to add a test project:

Add a few test methods:

Check this Step-By-Step for an explanation:

https://visualstudiomagazine.com/articles/2013/07/29/test-driven-development-with-visual-studio-2012.aspx

Check this link for some Unit test basics:

https://docs.microsoft.com/en-us/visualstudio/test/unit-test-basics?view=vs-2017

/// <summary>
/// Test whether FlipCoins flips as many coins as requested
/// </summary>
[TestMethod()]
public void FlipCoinsTest1Times()
{
    // Arrange
    int odd, even;
    int times = 1;

    // Act
    CoinFlipper coinFlipper = new CoinFlipper();
    coinFlipper.FlipCoins(times, out odd, out even);

    // Assert
    Assert.IsTrue(times == odd + even);
}
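For context, a minimal CoinFlipper that this test could run against might look something like the sketch below. The real class is not shown in this post, so treat the implementation (and the odd/even naming) as hypothetical:

public class CoinFlipper
{
    private readonly Random random = new Random();

    // Flip 'times' coins and report how many came up odd (heads) and even (tails).
    public void FlipCoins(int times, out int odd, out int even)
    {
        odd = 0;
        even = 0;
        for (int i = 0; i < times; i++)
        {
            if (random.Next(2) == 0) { odd++; } else { even++; }
        }
    }
}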

Open the Test Explorer window.

Right click a selection of tests and select “Run selected tests”.

When the tests pass, we can check in the solution. By default, the build in Azure will run tests from libraries with names containing *test*.
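For reference, the test assemblies filter in the VSTest task of the .NET Desktop template looks roughly like this (check the task in your own pipeline, since the exact default pattern differs a little between template versions):

**\*test*.dll
!**\obj\**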

The results for this solution: 8 tests passed.

Hosted build errors for missing Crystal Reports library references

I was planning to migrate my solution from a build on my local machine to a build in Azure DevOps. I want to use the Hosted VS2017 agent pool, because then I do not have to worry about maintaining local build servers.

When I added the solution to Azure and set up a build pipeline I encountered the following errors in the MSBuild log:

2019-01-10T10:11:49.9382855Z ##[error]Notepad\CrystalReportsViewer.cs(8,7): Error CS0246: The type or namespace name 'CrystalDecisions' could not be found (are you missing a using directive or an assembly reference?)
2019-01-10T10:11:49.9412999Z ##[error]Notepad\CrystalReport1.cs(153,53): Error CS0246: The type or namespace name 'RequestContext' could not be found (are you missing a using directive or an assembly reference?)
2019-01-10T10:11:49.9414407Z ##[error]Notepad\CrystalReportsViewer.cs(14,16): Error CS0246: The type or namespace name 'ReportDocument' could not be found (are you missing a using directive or an assembly reference?)
2019-01-10T10:11:49.9415960Z ##[error]Notepad\CrystalReport1.cs(19,35): Error CS0246: The type or namespace name 'ReportClass' could not be found (are you missing a using directive or an assembly reference?)
2019-01-10T10:11:49.9430403Z ##[error]Notepad\CrystalReport1.cs(24,32): Error CS0115: 'CrystalReport1.ResourceName': no suitable method found to override
2019-01-10T10:11:49.9432260Z ##[error]Notepad\CrystalReport1.cs(33,30): Error CS0115: 'CrystalReport1.NewGenerator': no suitable method found to override
2019-01-10T10:11:49.9433304Z ##[error]Notepad\CrystalReport1.cs(42,32): Error CS0115: 'CrystalReport1.FullResourceName': no suitable method found to override

I found a solution for this issue by adding a pre-build event:

First I needed to add the CRRuntime msi and a pre-build.bat file to my solution:

The content of the pre-build file is an administrative installation of the CRRuntime msi. The command is:
msiexec /a "%1CRRuntime_64bit_13_0_23.msi" /quiet /norestart /log "%1CRRuntime_64bit_13_0_23_install.log"
I only want this to be installed when building a release (in Azure), so I added a condition to only install for release builds.

if %2 == "release" msiexec /a "%1CRRuntime_64bit_13_0_23.msi" /quiet /norestart /log "%1CRRuntime_64bit_13_0_23_install.log"

Last I have added a pre-build event command line for the solution:
"$(ProjectDir)pre-build.bat" "$(ProjectDir)" "$(ConfigurationName)"
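Putting the two pieces together, the whole pre-build.bat is little more than the following sketch. The /I switch (case-insensitive compare) and the %~1/%~2 forms (which strip the surrounding quotes added by the pre-build event) are my own additions, so adjust to taste:

@echo off
rem %1 = $(ProjectDir), %2 = $(ConfigurationName)
if /I "%~2" == "release" msiexec /a "%~1CRRuntime_64bit_13_0_23.msi" /quiet /norestart /log "%~1CRRuntime_64bit_13_0_23_install.log"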

That’s that. Now the host will install the Crystal Reports runtime before building the solution.

How to set up a local deployment for an Azure build application

*Important note: This solution will only work when you do NOT have a .gitignore file in your repository*

Configure a local agent

The first requirement is a local agent that will be used for the local tasks.

How to configure local build and deploy agents is explained here:

https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/v2-windows?view=vsts

The result should look somewhat like this:

To control this agent you can choose to install it as a service on Windows.

Or you can choose to run the agent from the command line. To start and stop the agent I added two scripts:

Start.cmd:

cd /d C:\EK-VSTS-Agent
start "EK-VSTS Azure agent" .\run.cmd
exit

Stop.cmd:

taskkill /FI "WINDOWTITLE eq EK-VSTS Azure agent*" /T /F

Set up a build and release pipeline in Azure

Go to Pipelines in your Azure DevOps project and click New pipeline. My example uses a project named WPFDatasetWithSQL.


Click Continue, choose .NET Desktop, and click Apply.

If you want to build the solution on a hosted machine, keep the “Agent pool” set to “Hosted VS2017”. If you need local components to build, you can use a local machine or set up the required components in this build script.

For this example I need no extra components, so I will keep the agent pool on Hosted VS2017.

We are going to change a few steps in this script:

1 Set the MSBuild Arguments to /target:publish. This makes MSBuild add an app.publish folder to the build directory for ClickOnce deployment.

2 Change the step Copy Files to add the app.publish folder to the artifacts folder.
Display name = Copy Files to: $(build.artifactstagingdirectory)
Source Folder = $(Build.SourcesDirectory)\src\BLM\bin\$(BuildConfiguration)\app.publish
Contents = **\**

3 Change the artifact name.
Display name = Publish Artifact: $(System.TeamProject)-$(Build.BuildNumber)
Artifact name = $(System.TeamProject)-$(Build.BuildNumber)

Click Save and keep the default name.
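For reference, the same three steps expressed as a YAML pipeline would look roughly like the sketch below. The task names are the standard Azure Pipelines tasks, but the solution path and folders are the ones from this example and will differ for your own solution:

pool:
  vmImage: 'vs2017-win2016'

steps:
- task: VSBuild@1
  inputs:
    solution: '**\*.sln'
    msbuildArgs: '/target:publish'
    configuration: '$(BuildConfiguration)'

- task: CopyFiles@2
  inputs:
    SourceFolder: '$(Build.SourcesDirectory)\src\BLM\bin\$(BuildConfiguration)\app.publish'
    Contents: '**'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: '$(System.TeamProject)-$(Build.BuildNumber)'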

Set up a release pipeline

Now we will set up a release pipeline in which we can control and manage releases for this application.

Click on Releases in the menu and click New pipeline.

Choose an Empty job template. The release pipeline will contain little more than a few copy tasks.

First we have to choose an artifact. The choice is simple: we are going to use the artifacts from the build pipeline. Select the source build pipeline set up in the previous step and finish by clicking the Add button below.

The next step in this release pipeline is a deployment to “Test”, so we rename the default “Stage 1” to “Test”. Clicking the Stage 1 image (not the link to the job and tasks) opens a properties window. Rename Stage 1 to Test and click Save in the top right corner.

Now click the link to the job and tasks in the Test stage. Click the agent job and change the agent pool to the pool where you added the local agent. In my example I added the local agent to a pool named “local machine”.

Now we will add a task to copy the publish folder to a local directory. Click the plus sign next to “Agent job” and search for “Copy Files”.

Select the task added below the agent job and fill in the details:
Display name = Copy Files to: c:\drop\$(System.TeamProject)\$(Release.EnvironmentName)\$(Release.ReleaseName)\
Source Folder = $(system.defaultworkingdirectory)\_WPFDatasetWithSQL-.NET Desktop-CI (this last directory name is the build pipeline name)
Target Folder = c:\drop\$(System.TeamProject)\$(Release.EnvironmentName)\$(Release.ReleaseName)\

The source folder will contain the pipeline name for the build pipeline preceded by an underscore:

Click Save in the top right-hand corner.

Now we are going to add the production stage and the required copy jobs for this stage.

Click Releases in the left menu and click Edit.

Click “Clone” in the Test stage and rename the new stage from “Copy of Test” to “Production”. Then open the task details; here I added System.TeamProject to the source folder name, which removes the build number from the destination name.

Next, click the plus sign for the “Agent job” to add a Command line script. With this command line we will first clean the install folder before we copy the new release into that location. The command line script is rd /S /Q c:\drop\$(System.TeamProject)\Install\

The last task for this job is a second “Copy Files” task. This task copies the publish content into the install folder.

For the first run, disable the Command line script: the folder does not exist yet, and the command would fail if it is executed while the directory does not exist. After the first run the command can be enabled.
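Alternatively, you can guard the delete with an existence check, so it never fails on a machine where the folder does not exist yet and it can stay enabled from the first run (same command, just wrapped in if exist):

if exist c:\drop\$(System.TeamProject)\Install\ rd /S /Q c:\drop\$(System.TeamProject)\Install\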

The last option is to add an approval trigger on Production. A test manager or a group of testers can be allowed to approve the release after testing.

Another nice feature is to enable continuous integration and continuous deployment in Azure. To do this, go to the build pipeline and tick the checkbox “Enable continuous integration” on the “Triggers” tab.

Second, go to the release pipeline, click the continuous deployment trigger, and enable continuous deployment every time a new build is available. Click Save.

The first two times the deployment failed; I checked the logs and fixed some typing errors.

After the release is approved, the install folder is updated with the required binaries.

All done. Enjoy.

Unblock project files in Visual Studio

When you open project files copied from a remote location or downloaded from the internet, you get a message that the project location is not trusted:

or a security warning when opening the project, warning that the project file may have come from a location that is not fully trusted:

First make sure that the files you downloaded come from a trustworthy source. If that is the case, you can remove the warnings from these files like this:

Unblock downloaded files in folder including subfolders:

gci -Path "C:\BeeHive" -Recurse | Unblock-File (replace "C:\BeeHive" with the folder in which you want to unblock all files)
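If you only need to unblock a single file, Unblock-File also takes a path directly (again, substitute your own file):

Unblock-File -Path "C:\BeeHive\BeeHive.sln"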

Async Await usage examples

//Wait Async use:

private async void button1_Click(object sender, EventArgs e)
{
// Call the method that runs asynchronously.
string result = await WaitAsynchronouslyAsync();

// Call the method that runs synchronously.
//string result = await WaitSynchronously ();

// Display the result.
textBox1.Text += result;
}

// The following method runs asynchronously. The UI thread is not
// blocked during the delay. You can move or resize the Form1 window
// while Task.Delay is running.
public async Task<string> WaitAsynchronouslyAsync()
{
await Task.Delay(10000);
return "Finished";
}

// The following method runs synchronously, despite the use of async.
// You cannot move or resize the Form1 window while Thread.Sleep
// is running because the UI thread is blocked.
public async Task<string> WaitSynchronously()
{
// Add a using directive for System.Threading.
Thread.Sleep(10000);
return "Finished";
}

using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class Example
{
public static void Main()
{
string[] args = Environment.GetCommandLineArgs();
if (args.Length > 1)
GetPageSizeAsync(args[1]).Wait();
else
Console.WriteLine("Enter at least one URL on the command line.");
}

private static async Task GetPageSizeAsync(string url)
{
var client = new HttpClient();
var uri = new Uri(Uri.EscapeUriString(url));
byte[] urlContents = await client.GetByteArrayAsync(uri);
Console.WriteLine($"{url}: {urlContents.Length/2:N0} characters");
}
}
// The following call from the command line:
// await1 http://docs.microsoft.com
// displays output like the following:
// http://docs.microsoft.com: 7,967 characters

//In a separate class:

using System;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class Example
{
static async Task Main()
{
string[] args = Environment.GetCommandLineArgs();
if (args.Length < 2)
throw new ArgumentNullException("No URIs specified on the command line.");

// Don't pass the executable file name
var uris = args.Skip(1).ToArray();

long characters = await GetPageLengthsAsync(uris);
Console.WriteLine($"{uris.Length} pages, {characters:N0} characters");
}

private static async Task<long> GetPageLengthsAsync(string[] uris)
{
var client = new HttpClient();
long pageLengths = 0;

foreach (var uri in uris)
{
var escapedUri = new Uri(Uri.EscapeUriString(uri));
string pageContents = await client.GetStringAsync(escapedUri);
Interlocked.Add(ref pageLengths, pageContents.Length);
}

return pageLengths;
}
}

A successful Git branching model

By Vincent Driessen
on Tuesday, January 05, 2010


In this post I present the development model that I introduced for some of my projects (both at work and private) about a year ago, and which has turned out to be very successful. I’ve been meaning to write about it for a while now, but I’ve never really found the time to do so thoroughly, until now. I won’t talk about any of the projects’ details, merely about the branching strategy and release management.

Why git?

For a thorough discussion on the pros and cons of Git compared to centralized source code control systems, see the web. There are plenty of flame wars going on there. As a developer, I prefer Git above all other tools around today. Git really changed the way developers think of merging and branching. From the classic CVS/Subversion world I came from, merging/branching has always been considered a bit scary (“beware of merge conflicts, they bite you!”) and something you only do every once in a while.

But with Git, these actions are extremely cheap and simple, and they are considered one of the core parts of your daily workflow, really. For example, in CVS/Subversion books, branching and merging is first discussed in the later chapters (for advanced users), while in every Git book, it’s already covered in chapter 3 (basics).

As a consequence of its simplicity and repetitive nature, branching and merging are no longer something to be afraid of. Version control tools are supposed to assist in branching/merging more than anything else.

Enough about the tools, let’s head onto the development model. The model that I’m going to present here is essentially no more than a set of procedures that every team member has to follow in order to come to a managed software development process.

Decentralized but centralized

The repository setup that we use and that works well with this branching model, is that with a central “truth” repo. Note that this repo is only considered to be the central one (since Git is a DVCS, there is no such thing as a central repo at a technical level). We will refer to this repo as origin, since this name is familiar to all Git users.

Each developer pulls and pushes to origin. But besides the centralized push-pull relationships, each developer may also pull changes from other peers to form sub teams. For example, this might be useful to work together with two or more developers on a big new feature, before pushing the work in progress to origin prematurely. In the figure above, there are subteams of Alice and Bob, Alice and David, and Clair and David.

Technically, this means nothing more than that Alice has defined a Git remote, named bob, pointing to Bob’s repository, and vice versa.

The main branches

At the core, the development model is greatly inspired by existing models out there. The central repo holds two main branches with an infinite lifetime:

  • master
  • develop

The master branch at origin should be familiar to every Git user. Parallel to the master branch, another branch exists called develop.

We consider origin/master to be the main branch where the source code of HEAD always reflects a production-ready state.

We consider origin/develop to be the main branch where the source code of HEAD always reflects a state with the latest delivered development changes for the next release. Some would call this the “integration branch”. This is where any automatic nightly builds are built from.

When the source code in the develop branch reaches a stable point and is ready to be released, all of the changes should be merged back into master somehow and then tagged with a release number. How this is done in detail will be discussed further on.

Therefore, each time changes are merged back into master, this is a new production release by definition. We tend to be very strict about this, so that theoretically, we could use a Git hook script to automatically build and roll out our software to our production servers every time there is a commit on master.
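For a brand-new repository, that means creating the develop branch next to master once and publishing it, something along these lines (assuming master already exists on origin):

$ git checkout -b develop master
$ git push -u origin develop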

Supporting branches

Next to the main branches master and develop, our development model uses a variety of supporting branches to aid parallel development between team members, ease tracking of features, prepare for production releases and to assist in quickly fixing live production problems. Unlike the main branches, these branches always have a limited life time, since they will be removed eventually.

The different types of branches we may use are:

  • Feature branches
  • Release branches
  • Hotfix branches

Each of these branches has a specific purpose and is bound to strict rules as to which branches may be their originating branch and which branches must be their merge targets. We will walk through them in a minute.

By no means are these branches “special” from a technical perspective. The branch types are categorized by how we use them. They are of course plain old Git branches.

Feature branches

May branch off from:
develop
Must merge back into:
develop
Branch naming convention:
anything except master, develop, release-*, or hotfix-*

Feature branches (or sometimes called topic branches) are used to develop new features for the upcoming or a distant future release. When starting development of a feature, the target release in which this feature will be incorporated may well be unknown at that point. The essence of a feature branch is that it exists as long as the feature is in development, but will eventually be merged back into develop (to definitely add the new feature to the upcoming release) or discarded (in case of a disappointing experiment).

Feature branches typically exist in developer repos only, not in origin.

Creating a feature branch

When starting work on a new feature, branch off from the develop branch.

$ git checkout -b myfeature develop
Switched to a new branch "myfeature"

Incorporating a finished feature on develop

Finished features may be merged into the develop branch to definitely add them to the upcoming release:

$ git checkout develop
Switched to branch 'develop'
$ git merge --no-ff myfeature
Updating ea1b82a..05e9557
(Summary of changes)
$ git branch -d myfeature
Deleted branch myfeature (was 05e9557).
$ git push origin develop

The --no-ff flag causes the merge to always create a new commit object, even if the merge could be performed with a fast-forward. This avoids losing information about the historical existence of a feature branch and groups together all commits that together added the feature. Compare:

In the latter case, it is impossible to see from the Git history which of the commit objects together have implemented a feature—you would have to manually read all the log messages. Reverting a whole feature (i.e. a group of commits), is a true headache in the latter situation, whereas it is easily done if the --no-ff flag was used.

Yes, it will create a few more (empty) commit objects, but the gain is much bigger than the cost.
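As a side note, if you always want merges into develop to behave this way without typing the flag, Git can be configured per branch to add it for you (a standard Git configuration option, not something specific to this model):

$ git config branch.develop.mergeoptions "--no-ff"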

Release branches

May branch off from:
develop
Must merge back into:
develop and master
Branch naming convention:
release-*

Release branches support preparation of a new production release. They allow for last-minute dotting of i’s and crossing t’s. Furthermore, they allow for minor bug fixes and preparing meta-data for a release (version number, build dates, etc.). By doing all of this work on a release branch, the develop branch is cleared to receive features for the next big release.

The key moment to branch off a new release branch from develop is when develop (almost) reflects the desired state of the new release. At least all features that are targeted for the release-to-be-built must be merged in to develop at this point in time. All features targeted at future releases may not—they must wait until after the release branch is branched off.

It is exactly at the start of a release branch that the upcoming release gets assigned a version number—not any earlier. Up until that moment, the develop branch reflected changes for the “next release”, but it is unclear whether that “next release” will eventually become 0.3 or 1.0, until the release branch is started. That decision is made on the start of the release branch and is carried out by the project’s rules on version number bumping.

Creating a release branch

Release branches are created from the develop branch. For example, say version 1.1.5 is the current production release and we have a big release coming up. The state of develop is ready for the “next release” and we have decided that this will become version 1.2 (rather than 1.1.6 or 2.0). So we branch off and give the release branch a name reflecting the new version number:

$ git checkout -b release-1.2 develop
Switched to a new branch "release-1.2"
$ ./bump-version.sh 1.2
Files modified successfully, version bumped to 1.2.
$ git commit -a -m "Bumped version number to 1.2"
[release-1.2 74d9424] Bumped version number to 1.2
1 files changed, 1 insertions(+), 1 deletions(-)

After creating a new branch and switching to it, we bump the version number. Here, bump-version.sh is a fictional shell script that changes some files in the working copy to reflect the new version. (This can of course be a manual change—the point being that some files change.) Then, the bumped version number is committed.

This new branch may exist there for a while, until the release may be rolled out definitely. During that time, bug fixes may be applied in this branch (rather than on the develop branch). Adding large new features here is strictly prohibited. They must be merged into develop, and therefore, wait for the next big release.

Finishing a release branch

When the state of the release branch is ready to become a real release, some actions need to be carried out. First, the release branch is merged into master (since every commit on master is a new release by definition, remember). Next, that commit on master must be tagged for easy future reference to this historical version. Finally, the changes made on the release branch need to be merged back into develop, so that future releases also contain these bug fixes.

The first two steps in Git:

$ git checkout master
Switched to branch 'master'
$ git merge --no-ff release-1.2
Merge made by recursive.
(Summary of changes)
$ git tag -a 1.2

The release is now done, and tagged for future reference.

Edit: You might as well want to use the -s or -u <key> flags to sign your tag cryptographically.

To keep the changes made in the release branch, we need to merge those back into develop, though. In Git:

$ git checkout develop
Switched to branch 'develop'
$ git merge --no-ff release-1.2
Merge made by recursive.
(Summary of changes)

This step may well lead to a merge conflict (probably even, since we have changed the version number). If so, fix it and commit.

Now we are really done and the release branch may be removed, since we don’t need it anymore:

$ git branch -d release-1.2
Deleted branch release-1.2 (was ff452fe).

Hotfix branches

May branch off from:
master
Must merge back into:
develop and master
Branch naming convention:
hotfix-*

Hotfix branches are very much like release branches in that they are also meant to prepare for a new production release, albeit unplanned. They arise from the necessity to act immediately upon an undesired state of a live production version. When a critical bug in a production version must be resolved immediately, a hotfix branch may be branched off from the corresponding tag on the master branch that marks the production version.

The essence is that work of team members (on the develop branch) can continue, while another person is preparing a quick production fix.

Creating the hotfix branch

Hotfix branches are created from the master branch. For example, say version 1.2 is the current production release running live and causing troubles due to a severe bug. But changes on develop are yet unstable. We may then branch off a hotfix branch and start fixing the problem:

$ git checkout -b hotfix-1.2.1 master
Switched to a new branch "hotfix-1.2.1"
$ ./bump-version.sh 1.2.1
Files modified successfully, version bumped to 1.2.1.
$ git commit -a -m "Bumped version number to 1.2.1"
[hotfix-1.2.1 41e61bb] Bumped version number to 1.2.1
1 files changed, 1 insertions(+), 1 deletions(-)

Don’t forget to bump the version number after branching off!

Then, fix the bug and commit the fix in one or more separate commits.

$ git commit -m "Fixed severe production problem"
[hotfix-1.2.1 abbe5d6] Fixed severe production problem
5 files changed, 32 insertions(+), 17 deletions(-)

Finishing a hotfix branch

When finished, the bugfix needs to be merged back into master, but also needs to be merged back into develop, in order to safeguard that the bugfix is included in the next release as well. This is completely similar to how release branches are finished.

First, update master and tag the release.

$ git checkout master
Switched to branch 'master'
$ git merge --no-ff hotfix-1.2.1
Merge made by recursive.
(Summary of changes)
$ git tag -a 1.2.1

Edit: You might as well want to use the -s or -u <key> flags to sign your tag cryptographically.

Next, include the bugfix in develop, too:

$ git checkout develop
Switched to branch 'develop'
$ git merge --no-ff hotfix-1.2.1
Merge made by recursive.
(Summary of changes)

The one exception to the rule here is that, when a release branch currently exists, the hotfix changes need to be merged into that release branch, instead of develop. Back-merging the bugfix into the release branch will eventually result in the bugfix being merged into develop too, when the release branch is finished. (If work in develop immediately requires this bugfix and cannot wait for the release branch to be finished, you may safely merge the bugfix into develop now already as well.)

Finally, remove the temporary branch:

$ git branch -d hotfix-1.2.1
Deleted branch hotfix-1.2.1 (was abbe5d6).

Summary

While there is nothing really shocking new to this branching model, the “big picture” figure that this post began with has turned out to be tremendously useful in our projects. It forms an elegant mental model that is easy to comprehend and allows team members to develop a shared understanding of the branching and releasing processes.

A high-quality PDF version of the figure is provided here. Go ahead and hang it on the wall for quick reference at any time.

Update: And for anyone who requested it: here’s the gitflow-model.src.key of the main diagram image (Apple Keynote).



If you want to get in touch, I’m @nvie on Twitter.

How to calculate Pi

How to Calculate Pi, π, Simpson’s Rule
Nearly everyone knows that Pi or π is a peculiar number that is a little more than three. Most of those people know that the number has importance because it is the ratio of the distance around the perimeter (circumference) of a circle when compared to that circle’s diameter. It is what is called an irrational number, where no fraction can be written that equals it, and in fact, the decimal numbers needed to precisely define Pi would go on forever. So how is that number determined?
We will present two different methods of calculating it here. The first could be called the hard way, and it is the approach that ancient Greeks used 2300 years ago to first determine a value for Pi. The second is a much more sophisticated approach which relies both on Calculus and on something called Simpson’s Rule, to arrive at a far more precise value rather simply and easily.

Continued:

The Original Way of Determining Pi
First, let us define a circle that has what is called Unity diameter. That is, it has a Radius = 0.5 and a Diameter = 1.00.
We could INSCRIBE an equilateral triangle inside the circle, where the points/corners just touch the circle. We could carefully measure the length of the sides of that triangle, and would find that they are each slightly over 0.866 units long. With the triangle having three sides, the total perimeter of the triangle is therefore about 2.6 units. We can see that the distance around the circle is greater than this, in other words, Pi must be greater than 2.6. In the same way, we could draw an equilateral triangle which is larger, where the midpoints of the sides each exactly touch the circle, and we can measure the length of those sides to be around 1.732 units. Again, with three sides, we have a total perimeter of this triangle to be around 5.2 units, so we know the distance around the circle must be less than 5.2.
Now, if we do the same thing using squares instead, the larger number of sides more closely follows the shape of the circle and we get better results, indicating that Pi must be between 2.83 and 4.00. If we use five-sided pentagons instead, the result is better yet, Pi being between 2.94 and 3.63. By using six-sided hexagons, Pi is shown to be between 3.00 and 3.46.
For the ancient Greeks, this proceeded fairly well, but it took a lot of time and effort and it required really accurate measurements of the lengths of the sides of the regular polygons, and also really accurate drawings of those polygons so that they truly were Regular (all equal sided). However, the process was continued (skipping many numbers of sides) up to 120 sides. If you think about it, a 120-sided inscribed polygon would clearly very closely resemble the shape of the circle, and would therefore closely indicate the value of Pi. In fact, by using 120-sided polygons, we can determine that Pi must be between 3.1412 and 3.1423, decently close to the 3.1416 that we all know. In fact, if you average the two values (based on lower limit and upper limit) you get 3.1418, a value that is quite close!
However, that value is not close enough for modern Engineering requirements! Which is why the advanced approach presented below is now considered far better. However, here is a chart of the (measured and calculated) values for various numbers of sides for the polygons. Note that if we do this for 2000-sided polygons, the value becomes quite close. However, the process of doing that measurement is extremely difficult for such short polygon sides, when the measured dimension must be to an accuracy of better than one-part-in-a-million!
Number of sides | inside, one side | inside, total | outside, one side | outside, total | average in/out
3 0.866025 2.598076 1.732051 5.196152 3.897114
4 0.707107 2.828427 1.000000 4.000000 3.414214
5 0.587785 2.938926 0.726543 3.632713 3.285820
6 0.500000 3.000000 0.577350 3.464102 3.232051
7 0.433884 3.037186 0.481575 3.371022 3.204104
8 0.382683 3.061467 0.414214 3.313709 3.187588
9 0.342020 3.078181 0.363970 3.275732 3.176957
10 0.309017 3.090170 0.324920 3.249197 3.169683
11 0.281733 3.099058 0.293626 3.229891 3.164475
12 0.258819 3.105829 0.267949 3.215390 3.160609
13 0.239316 3.111104 0.246478 3.204212 3.157658
14 0.222521 3.115293 0.228243 3.195409 3.155351
15 0.207912 3.118675 0.212557 3.188348 3.153512
16 0.195090 3.121445 0.198912 3.182598 3.152021
17 0.183750 3.123742 0.186932 3.177851 3.150796
18 0.173648 3.125667 0.176327 3.173886 3.149776
19 0.164595 3.127297 0.166870 3.170539 3.148918
20 0.156434 3.128689 0.158384 3.167689 3.148189
120 0.026177 3.141234 0.026186 3.142311 3.141772
480 0.006545 3.141570 0.006545 3.141637 3.141604
2000 0.001571 3.141591 0.001571 3.141595 3.141593

The Improved Way of Determining Pi
This will be kept fairly painless! Actually, you do not need to know any Calculus, or even anything beyond simple adding, multiplying and dividing to do this! These next few paragraphs just explain the basis of why this works, which is because of some results in Calculus.
We first need to note that the definition of Pi is that the diameter times Pi gives the circumference of any circle. That means that for a circle of radius 1 the circumference is equal to 2 * Pi, so half a circle or 180 degrees corresponds to Pi (usually said to be Pi radians).
It turns out to be fairly easily provable in Calculus that the Derivative of the Inverse Tangent (a trigonometry term) is equal to 1/(1 + X²). Since both the Tangent and its Derivative are continuous functions (except at specific points, which we will avoid), that means that the ANTI-Derivative of 1/(1 + X²) is the Inverse Tangent. For a continuous function, the Anti-Derivative is the same as the Integral, so this means that the Integral of 1/(1 + X²) is equal to the Inverse Tangent (over a given interval of angles).
We can select a specific range of angles, and for simplicity we select from zero to the angle which has a tangent of exactly 1, which is the angle that we often call 45 degrees. So if we can just evaluate that quantity 1/(1 + X²) over the range of X equals 0 to 1, and add it all up (as a Calculus Integral does), we would then have a result that equalled the difference which is just the angle between the two angles. In Trigonometry, the circumference of a circle is given as 2 * Pi * R, where Pi therefore represents 180 degrees. Therefore our 45 degree range is just Pi/4, exactly.
Therefore, by evaluating the Integral of our 1/(1 + X²) over the range of 0 to 1, we would get a result that was exactly equal to Pi/4. We’re getting somewhere!
There is a well proven rule in mathematics called Simpson’s Rule. It is actually an approximation, but a really good one, which is essentially based on the fact that if any three points of a curve are known, a unique parabolic curve can be drawn which passes through them, and so the simple quadratic formula for a parabola then gives a very good estimate for the curve in that section. In any case, Simpson’s Rule is fairly simple, when the points on the curve are equally spaced along a coordinate, and that there are an even number of intervals between those points. We will use the simple example of four intervals, or 5 data points here.
Our whole range is from 0 to 1, so our interval must be exactly 1/4, so we have values for X of 0, 1/4, 1/2, 3/4, and 1. We can easily calculate our 1/(1 + X²) for each of these values, to get 1, 16/17, 4/5, 16/25, and 1/2. Simpson’s Rule is actually very simple, where these various terms get multiplied by either 1, 2, or 4, and added together. I am going to make you find some textbook for the exact pattern, but it is really extremely simple. If presented here, it might distract from the central point! (The following tables do indicate those multipliers)
We can present our data here in a small table:
Number of divisions = 4
value of X | calculated quantity 1/(1 + X²) | multiplier | running total of multiplied quantities
0.00 1.0000000 1 1.0000000
0.25 0.9411765 4 4.7647059
0.50 0.8000000 2 6.3647059
0.75 0.6400000 4 8.9247059
1.00 0.5000000 1 9.4247059
According to Simpson’s Rule we now need to divide this by 3 and multiply by the size of our intervals (1/4), in other words, in this case, dividing by 12. We then get a result of 0.7853921569.
This value is then equal to the number of radians in 45 degrees. To get the number of radians in 180 degrees, in other words, Pi, we just multiply by four. We then get 3.1415686275
Given how simple this was to do, easily done with pencil and paper, it is pretty impressive that we get a result that is surprisingly precise!
So now we decide to use six intervals instead of four!
Number of divisions = 6
value of X | calculated quantity 1/(1 + X²) | multiplier | running total of multiplied quantities
0 1.0000000 1 1.0000000
1/6 0.9729730 4 4.8918919
1/3 0.9000000 2 6.6918919
1/2 0.8000000 4 9.8918919
2/3 0.6923077 2 11.2765073
5/6 0.5901639 4 13.6371630
1 0.5000000 1 14.1371630
We must now divide this by 3 and multiply by 1/6, or actually, divide by 18, to get: 0.7853979452
Multiplying this by four gives 3.1415917809, an even better value for Pi.
It seems important to again note that this is a simple pencil and paper calculation that only involves simple addition, multiplication and division, and no exotic math stuff! Impressive, huh?
Well, you are free to invest maybe an hour in doing this calculation for 26 intervals. You will get a result of 0.7853981634 and then 3.1415926535 for the value of Pi, which is accurate to all ten of these decimal points!
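The whole procedure above also fits in a few lines of code. Here is a small C# sketch of it (not part of the original article; the class and method names are mine):

using System;

class SimpsonPi
{
    // f(x) = 1 / (1 + x^2); its integral from 0 to 1 equals Pi / 4.
    static double F(double x) => 1.0 / (1.0 + x * x);

    // Simpson's Rule with an even number of equal intervals on [0, 1].
    static double PiBySimpson(int intervals)
    {
        double h = 1.0 / intervals;
        double sum = F(0.0) + F(1.0);
        for (int i = 1; i < intervals; i++)
            sum += F(i * h) * (i % 2 == 0 ? 2 : 4);  // the 4, 2, 4, ... multipliers
        return 4.0 * sum * h / 3.0;                  // (h / 3) * sum, times 4 to go from Pi/4 to Pi
    }

    static void Main()
    {
        Console.WriteLine(PiBySimpson(4));   // about 3.1415686, as in the first table
        Console.WriteLine(PiBySimpson(6));   // about 3.1415918, as in the second table
        Console.WriteLine(PiBySimpson(26));  // the article reports roughly 3.1415926535 here
    }
}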

So, just in case you had thought that the original ancient Greek approach was still used, with polygons having billions of sides, in determining the extremely accurate values for Pi (sometimes given to 100 or one million decimals), now you know how it is actually done! Setting up a computer to do these simple additions, multiplications and divisions is pretty easy, and the only limitation then is the accuracy of the values of the numbers used in the computer. If you use a computer system that has 40 significant digits, even YOU could now quickly calculate Pi to 40 decimals!

Yet another modern way!
It turns out that there is a way to do this without having to rely on Simpson’s Rule. There is another Calculus proof that the quantity we used above, the Integral of 1/(1 + X²) (that is, the Inverse Tangent of X), can be expressed as an alternating infinite series:
X − X³/3 + X⁵/5 − X⁷/7 + X⁹/9 …
Since our values of X are zero and one, this simplifies. When X = 0 it is obvious that this is a sum of a lot of zeroes, or zero. For X = 1, this is just:
1 – 1/3 + 1/5 – 1/7 + 1/9 …
So the difference, the number we want that is Pi/4 is obviously just this last infinite series total. Sounds easy, huh? It is, except that you have to include a LOT of terms to get a very accurate value for Pi!
If you do this for a sum of the terms up to 1/41 (20 terms), you get a value for Pi of 3.0961615.
If you do this for a sum of the terms up to 1/4001 (2000 terms), you get a value for Pi of 3.1410931.
If you do this for a sum of the terms up to 1/400001 (200,000 terms), you get a value for Pi of 3.1415876.
If you do this for a sum of the terms up to 1/4000001 (2,000,000 terms), you get a value for Pi of 3.14159215.
If you do this for a sum of the terms up to 1/40000001 (20,000,000 terms), you get a value for Pi of 3.14159260.
That would be a LOT of additions and subtractions to get a value for Pi that still is not very impressive! We noted above that the actual value for Pi to ten decimals is 3.1415926535, so with this other method, our 20 million additions and subtractions still only get a precision to around 7 correct decimals. Not nearly as good as the Simpson’s Rule method above, even though it initially looks very attractive!
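For completeness, here is the same kind of sketch (again my own C#, not from the article) for this alternating series, which makes it easy to reproduce the figures above and to see how slowly the series converges:

using System;

class LeibnizPi
{
    // Partial sum of 1 - 1/3 + 1/5 - 1/7 + ..., which converges to Pi / 4.
    static double PiBySeries(long terms)
    {
        double sum = 0.0;
        for (long k = 0; k < terms; k++)
            sum += (k % 2 == 0 ? 1.0 : -1.0) / (2 * k + 1);
        return 4.0 * sum;
    }

    static void Main()
    {
        Console.WriteLine(PiBySeries(20));        // still only about 3.09
        Console.WriteLine(PiBySeries(2000000));   // about 3.1415921 after two million terms
    }
}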
But we are showing that there are many ways to skin a cat! (figuratively speaking, of course!)

This presentation was first placed on the Internet in November 2006.
This page, How to Calculate Pi, π, Simpson’s Rule, is at http://mb-soft.com/public3/pi.html

Useful SQL Server DBCC Commands

A handy link:
http://www.sql-server-performance.com/tips/dbcc_commands_p1.aspx
If the link should fail, here is the content:

Continued:
Useful SQL Server DBCC Commands
By: Brad McGehee

DBCC CACHESTATS displays information about the objects currently in the buffer cache, such as hit rates, compiled objects and plans, etc. Example:
DBCC CACHESTATS
Sample Results (abbreviated):
Object Name Hit Ratio
———— ————-
Proc 0.86420054765378507
Prepared 0.99988494930394334
Adhoc 0.93237136647793051
ReplProc 0.0
Trigger 0.99843452831887947
Cursor 0.42319205924058612
Exec Cxt 0.65279111666076906
View 0.95740334726893905
Default 0.60895011346896522
UsrTab 0.94985969576133511
SysTab 0.0
Check 0.67021276595744683
Rule 0.0
Summary 0.80056155581812771
Here’s what some of the key statistics from this command mean:
⦁ Hit Ratio: Displays the percentage of time that this particular object was found in SQL Server’s cache. The bigger this number, the better.
⦁ Object Count: Displays the total number of objects of the specified type that are cached.
⦁ Avg. Cost: A value used by SQL Server that measures how long it takes to compile a plan, along with the amount of memory needed by the plan. This value is used by SQL Server to determine if the plan should be cached or not.
⦁ Avg. Pages: Measures the total number of 8K pages used, on average, for cached objects.
⦁ LW Object Count, LW Avg Cost, WL Avg Stay, LW Ave Use: All these columns indicate how many of the specified objects have been removed from the cache by the Lazy Writer. The lower the figure, the better.
[7.0, 2000] Updated 9-1-2005

DBCC DROPCLEANBUFFERS: Use this command to remove all the data from SQL Server’s data cache (buffer) between performance tests to ensure fair testing. Keep in mind that this command only removes clean buffers, not dirty buffers. Because of this, before running the DBCC DROPCLEANBUFFERS command, you may want to run the CHECKPOINT command first. Running CHECKPOINT will write all dirty buffers to disk. And then when you run DBCC DROPCLEANBUFFERS, you can be assured that all data buffers are cleaned out, not just the clean ones. Example:
DBCC DROPCLEANBUFFERS
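Putting that advice into practice, a test run would typically clear the cache like this (CHECKPOINT first, then the DBCC command, exactly as described above):

CHECKPOINT
DBCC DROPCLEANBUFFERS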
[7.0, 2000, 2005] Updated 9-1-2005

DBCC ERRORLOG: If you rarely restart the mssqlserver service, you may find that your server log gets very large and takes a long time to load and view. You can truncate (essentially create a new log) the Current Server log by running DBCC ERRORLOG. You might want to consider scheduling a regular job that runs this command once a week to automatically truncate the server log. As a rule, I do this for all of my SQL Servers on a weekly basis. Also, you can accomplish the same thing using this stored procedure: sp_cycle_errorlog. Example:
DBCC ERRORLOG
[7.0, 2000, 2005] Updated 9-1-2005

DBCC FLUSHPROCINDB: Used to clear out the stored procedure cache for a specific database on a SQL Server, not the entire SQL Server. The database ID number to be affected must be entered as part of the command. You may want to use this command before testing to ensure that previous stored procedure plans won’t negatively affect testing results. Example:
DECLARE @intDBID INTEGER
SET @intDBID = (SELECT dbid FROM master.dbo.sysdatabases WHERE name = 'database_name')
DBCC FLUSHPROCINDB (@intDBID)
[7.0, 2000, 2005] Updated 9-1-2005

DBCC INDEXDEFRAG: In SQL Server 2000, Microsoft introduced DBCC INDEXDEFRAG to help reduce logical disk fragmentation. When this command runs, it reduces fragmentation and does not lock tables, allowing users to access the table when the defragmentation process is running. Unfortunately, this command doesn’t do a great job of logical defragmentation. The only way to truly reduce logical fragmentation is to rebuild your table’s indexes. While this will remove all fragmentation, unfortunately it will lock the table, preventing users from accessing it during this process. This means that you will need to find a time when this will not present a problem to your users. Of course, if you are unable to find a time to reindex your indexes, then running DBCC INDEXDEFRAG is better than doing nothing. Example:
DBCC INDEXDEFRAG (Database_Name, Table_Name, Index_Name)
[2000] Updated 9-1-2005

DBCC FREEPROCCACHE: Used to clear out the stored procedure cache for all SQL Server databases. You may want to use this command before testing to ensure that previous stored procedure plans won’t negatively affect testing results. Example:
DBCC FREEPROCCACHE
[7.0, 2000, 2005] Updated 10-16-2005

DBCC MEMORYSTATUS: Lists a breakdown of how the SQL Server buffer cache is divided up, including buffer activity. This is an undocumented command, and one that may be dropped in future versions of SQL Server. Example:
DBCC MEMORYSTATUS
[7.0, 2000] Updated 10-16-2005

DBCC OPENTRAN: An open transaction can leave locks open, preventing others from accessing the data they need in a database. This command is used to identify the oldest open transaction in a specific database. Example:
DBCC OPENTRAN('database_name')
[7.0, 2000] Updated 10-16-2005

DBCC PAGE: Use this command to look at the contents of a data page stored in SQL Server. Example:
DBCC PAGE ({dbid|dbname}, pagenum [,print option] [,cache] [,logical])
where:
Dbid or dbname: Enter either the dbid or the name of the database in question.
Pagenum: Enter the page number of the SQL Server page that is to be examined.
Print option: (Optional) Print option can be either 0, 1, or 2.
  0 – (Default) This option causes DBCC PAGE to print out only the page header information.
  1 – This option causes DBCC PAGE to print out the page header information, each row of information from the page, and the page’s offset table. Each of the rows printed out will be separated from each other.
  2 – This option is the same as option 1, except it prints the page rows as a single block of information rather than separating the individual rows. The offset and header will also be displayed.
Cache: (Optional) This parameter allows either a 1 or a 0 to be entered.
  0 – This option causes DBCC PAGE to retrieve the page number from disk rather than checking to see if it is in cache.
  1 – (Default) This option takes the page from cache if it is in cache rather than getting it from disk only.
Logical: (Optional) This parameter is for use if the page number that is to be retrieved is a virtual page rather than a logical page. It can be either 0 or 1.
  0 – If the page is to be a virtual page number.
  1 – (Default) If the page is the logical page number.

[6.5, 7.0, 2000] Updated 10-16-2005

DBCC PINTABLE & DBCC UNPINTABLE: By default, SQL Server automatically brings into its data cache the pages it needs to work with. These data pages will stay in the data cache until there is no room for them, and assuming they are not needed, these pages will be flushed out of the data cache onto disk. At some point in the future when SQL Server needs these data pages again, it will have to go to disk in order to read them again into the data cache for use. If SQL Server somehow had the ability to keep the data pages in the data cache all the time, then SQL Server’s performance would be increased because I/O could be reduced on the server.

The process of “pinning a table” is a way to tell SQL Server that we don’t want it to flush out data pages for specific named tables once they are read into the cache in the first place. This in effect keeps these database pages in the data cache all the time, which eliminates the process of SQL Server from having to read the data pages, flush them out, and reread them again when the time arrives. As you can imagine, this can reduce I/O for these pinned tables, boosting SQL Server’s performance.

To pin a table, the command DBCC PINTABLE is used. For example, the script below can be run to pin a table in SQL Server:
DECLARE @db_id int, @tbl_id int
USE Northwind
SET @db_id = DB_ID('Northwind')
SET @tbl_id = OBJECT_ID('Northwind..categories')
DBCC PINTABLE (@db_id, @tbl_id)
While you can use the DBCC PINTABLE directly, without the rest of the above script, you will find the script handy because the DBCC PINTABLE’s parameters refer to the database and table ID that you want to pin, not by their database and table name. This script makes it a little easier to pin a table. You must run this command for every table you want to pin.Once a table is pinned in the data cache, this does not mean that the entire table is automatically loaded into the data cache. It only means that as data pages from that table are needed by SQL Server, they are loaded into the data cache, and then stay there, not ever being flushed out to disk until you give the command to unpin the table using the DBCC UNPINTABLE. It is possible that part of a table, and not all of it, will be all that is pinned.When you are done with a table and you no longer want it pinned, you will want to unpin your table. To do so, run this example code:
DECLARE @db_id int, @tbl_id int
USE Northwind
SET @db_id = DB_ID('Northwind')
SET @tbl_id = OBJECT_ID('Northwind..categories')
DBCC UNPINTABLE (@db_id, @tbl_id)
[6.5, 7.0, 2000]Updated 10-16-2005

DBCC PROCCACHE: Displays information about how the stored procedure cache is being used. Example:
DBCC PROCCACHE
[6.5, 7.0, 2000] Updated 10-16-2005

DBCC REINDEX: Periodically (weekly or monthly) perform a database reorganization on all the indexes on all the tables in your database. This will rebuild the indexes so that the data is no longer fragmented. Fragmented data can cause SQL Server to perform unnecessary data reads, slowing down SQL Server’s performance. If you perform a reorganization on a table with a clustered index, any non-clustered indexes on that same table will automatically be rebuilt.

Database reorganizations can be done by scheduling SQLMAINT.EXE to run using the SQL Server Agent, or by running your own custom script via the SQL Server Agent (see below).

Unfortunately, the DBCC DBREINDEX command will not automatically rebuild all of the indexes on all the tables in a database; it can only work on one table at a time. But if you run the following script, you can index all the tables in a database with ease. Example:
DBCC DBREINDEX('table_name', fillfactor)
or
--Script to automatically reindex all tables in a database

USE DatabaseName --Enter the name of the database you want to reindex

DECLARE @TableName varchar(255)

DECLARE TableCursor CURSOR FOR
SELECT table_name FROM information_schema.tables
WHERE table_type = 'base table'

OPEN TableCursor

FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT 'Reindexing ' + @TableName
DBCC DBREINDEX(@TableName,' ',90)
FETCH NEXT FROM TableCursor INTO @TableName
END

CLOSE TableCursor

DEALLOCATE TableCursor
The script will automatically reindex every index in every table of any database you select, and provide a fillfactor of 90%. You can substitute any number you want for the 90 in the above script. When DBCC DBREINDEX is used to rebuild indexes, keep in mind that as the indexes on a table are being rebuilt, the table becomes unavailable for use by your users. For example, when a non-clustered index is rebuilt, a shared table lock is put on the table, preventing all but SELECT operations to be performed on it. When a clustered index is rebuilt, an exclusive table lock is put on the table, preventing any table access by your users. Because of this, you should only run this command when users don’t need access to the tables being reorganized.

[7.0, 2000] Updated 10-16-2005

DBCC SHOWCONTIG: Used to show how fragmented data and indexes are in a specified table. If data pages storing data or index information become fragmented, it takes more disk I/O to find and move the data to the SQL Server cache buffer, hurting performance. This command tells you how fragmented these data pages are. If you find that fragmentation is a problem, you can reindex the tables to eliminate the fragmentation. Note: this fragmentation is fragmentation of data pages within the SQL Server MDB file, not of the physical file itself. Since this command requires you to know the ID of both the table and index being analyzed, you may want to run the following script so you don’t have to manually look up the table name ID number and the index ID number. Example:
DBCC SHOWCONTIG (Table_id, IndexID)
Or:
--Script to identify table fragmentation

--Declare variables
DECLARE
@ID int,
@IndexID int,
@IndexName varchar(128)

--Set the table and index to be examined
SELECT @IndexName = 'index_name' --enter name of index
SET @ID = OBJECT_ID('table_name') --enter name of table

--Get the Index Values
SELECT @IndexID = IndID
FROM sysindexes
WHERE id = @ID AND name = @IndexName

--Display the fragmentation
DBCC SHOWCONTIG (@id, @IndexID)
While the DBCC SHOWCONTIG command provides several measurements, the key one is Scan Density. This figure should be as close to 100% as possible. If the scan density is less than 75%, then you may want to reindex the tables in your database.

[6.5, 7.0, 2000] Updated 3-20-2006

DBCC SHOW_STATISTICS: Used to find out the selectivity of an index. Generally speaking, the higher the selectivity of an index, the greater the likelihood it will be used by the query optimizer. You have to specify both the table name and the index name you want to find the statistics on. Example:
DBCC SHOW_STATISTICS (table_name, index_name)
[7.0, 2000] Updated 3-20-2006

DBCC SQLMGRSTATS: Used to produce three different values that can sometimes be useful when you want to find out how well caching is being performed on ad-hoc and prepared Transact-SQL statements. Example:
DBCC SQLMGRSTATS
Sample Results:
Item Status
————————- ———–
Memory Used (8k Pages) 5446
Number CSql Objects 29098
Number False Hits 425490
Here’s what the above means:
⦁ Memory Used (8k Pages): If the amount of memory pages is very large, this may be an indication that some user connection is preparing many Transact-SQL statements, but is not un-preparing them.
⦁ Number CSql Objects: Measures the total number of cached Transact-SQL statements.
⦁ Number False Hits: Sometimes, false hits occur when SQL Server goes to match pre-existing cached Transact-SQL statements. Ideally, this figure should be as low as possible.
[2000] Added 4-17-2003

DBCC SQLPERF(): This command includes both documented and undocumented options. Let’s take a look at all of them and see what they do.
DBCC SQLPERF (LOGSPACE)
This option (documented) returns data about the transaction log for all of the databases on the SQL Server, including Database Name, Log Size (MB), Log Space Used (%), and Status.
DBCC SQLPERF (UMSSTATS)
This option (undocumented) returns data about SQL Server thread management.
DBCC SQLPERF (WAITSTATS)
This option (undocumented) returns data about wait types for SQL Server resources.
DBCC SQLPERF (IOSTATS)
This option (undocumented) returns data about outstanding SQL Server reads and writes.
DBCC SQLPERF (RASTATS)
This option (undocumented) returns data about SQL Server read-ahead activity.
DBCC SQLPERF (THREADS)
This option (undocumented) returns data about I/O, CPU, and memory usage per SQL Server thread. [7.0, 2000] Updated 3-20-2006

DBCC SQLPERF (UMSSTATS): When you run this command, you get output like this. (Note, this example was run on a 4 CPU server. There is 1 Scheduler ID per available CPU.)

Statistic Value
——————————– ————————
Scheduler ID 0.0
num users 18.0
num runnable 0.0
num workers 13.0
idle workers 11.0
work queued 0.0
cntxt switches 2.2994396E+7
cntxt switches(idle) 1.7793976E+7
Scheduler ID 1.0
num users 15.0
num runnable 0.0
num workers 13.0
idle workers 10.0
work queued 0.0
cntxt switches 2.4836728E+7
cntxt switches(idle) 1.6275707E+7
Scheduler ID 2.0
num users 17.0
num runnable 0.0
num workers 12.0
idle workers 11.0
work queued 0.0
cntxt switches 1.1331447E+7
cntxt switches(idle) 1.6273097E+7
Scheduler ID 3.0
num users 16.0
num runnable 0.0
num workers 12.0
idle workers 11.0
work queued 0.0
cntxt switches 1.1110251E+7
cntxt switches(idle) 1.624729E+7
Scheduler Switches 0.0
Total Work 3.1632352E+7

Below is an explanation of some of the key statistics above:
⦁ num users: This is the number of SQL Server threads currently in the scheduler.
⦁ num runnable: This is the number of actual SQL Server threads that are runnable.
⦁ num workers: This is the actual number of workers there are to process threads. This is the size of the thread pool.
⦁ idle workers: The number of workers that are currently idle.
⦁ cntxt switches: The number of context switches between runnable threads.
⦁ cntxt switches (idle): The number of context switches to the idle thread.
[2000] Added 4-17-2003

DBCC TRACEON & DBCC TRACEOFF: Used to turn on and off trace flags. Trace flags are often used to turn on and off specific server behavior or server characteristics temporarily. On rare occasions, they can be useful for troubleshooting SQL Server performance problems. Example: To use the DBCC TRACEON command to turn on a specified trace flag, use this syntax:
DBCC TRACEON (trace# [,…n])
To use the DBCC TRACEOFF command to turn off a specified trace flag, use this syntax:
DBCC TRACEOFF (trace# [,…n])
You can also use the DBCC TRACESTATUS command to find out which trace flags are currently turned on in your server using this syntax:
DBCC TRACESTATUS (trace# [,…n])
For specific information on the different kinds of trace flags available, search this website or look them up in Books Online. [6.5, 7.0, 2000] Updated 3-20-2006

DBCC UPDATEUSAGE: The official use for this command is to report and correct inaccuracies in the sysindexes table, which may result in incorrect space usage reports. Apparently, it can also fix the problem of unreclaimed data pages in SQL Server. You may want to consider running this command periodically to clean up potential problems. This command can take some time to run, and you want to run it during off times because it will negatively affect SQL Server’s performance when running. When you run this command, you must specify the name of the database that you want affected. Example:
DBCC UPDATEUSAGE ('databasename')
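You can also limit the command to a single table and suppress the informational messages; the database and table names below are placeholders:

-- Report and correct space usage for one table only, without informational messages
DBCC UPDATEUSAGE ('MyDatabase', 'dbo.MyTable') WITH NO_INFOMSGS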
[7.0, 2000] Updated 3-20-2006

© 2000 – 2008 vDerivatives Limited. All Rights Reserved.

Release artifacts and artifact sources

https://docs.microsoft.com/en-us/azure/devops/pipelines/release/artifacts?view=vsts

Note

Build and release pipelines are called definitions in TFS 2018 and in older versions. Service connections are called service endpoints in TFS 2018 and in older versions.

A release is a collection of artifacts in your DevOps CI/CD processes. An artifact is a deployable component of your application. Azure Pipelines can deploy artifacts that are produced by a wide range of artifact sources, and stored in different types of artifact repositories.

When authoring a release pipeline, you link the appropriate artifact sources to your release pipeline. For example, you might link an Azure Pipelines build pipeline or a Jenkins project to your release pipeline.

When creating a release, you specify the exact version of these artifact sources; for example, the number of a build coming from Azure Pipelines, or the version of a build coming from a Jenkins project.

After a release is created, you cannot change these versions. A release is fundamentally defined by the versioned artifacts that make up the release. As you deploy the release to various stages, you will be deploying and validating the same artifacts in all stages.

A single release pipeline can be linked to multiple artifact sources, of which one is the primary source. In this case, when you create a release, you specify individual versions for each of these sources.

Artifacts in a pipeline and release

Artifacts are central to a number of features in Azure Pipelines. Some of the features that depend on the linking of artifacts to a release pipeline are:

  • Auto-trigger releases. You can configure new releases to be automatically created whenever a new version of an artifact is produced. For more details, see Continuous deployment triggers. Note that the ability to automatically create releases is available for only some artifact sources.
  • Trigger conditions. You can configure a release to be created automatically, or the deployment of a release to a stage to be triggered automatically, when only specific conditions on the artifacts are met. For example, you can configure releases to be automatically created only when a new build is produced from a certain branch.
  • Artifact versions. You can configure a release to automatically use a specific version of the build artifacts, to always use the latest version, or to allow you to specify the version when the release is created.
  • Artifact variables. Every artifact that is part of a release has metadata associated with it, exposed to tasks through variables. This metadata includes the version number of the artifact, the branch of code from which the artifact was produced (in the case of build or source code artifacts), the pipeline that produced the artifact (in the case of build artifacts), and more. This information is accessible in the deployment tasks. For more details, see Artifact variables.
  • Work items and commits. The work items or commits that are part of a release are computed from the versions of artifacts. For example, each build in Azure Pipelines is associated with a set of work items and commits. The work items or commits in a release are computed as the union of all work items and commits of all builds between the current release and the previous release. Note that Azure Pipelines is currently able to compute work items and commits for only certain artifact sources.
  • Artifact download. Whenever a release is deployed to a stage, by default Azure Pipelines automatically downloads all the artifacts in that release to the agent where the deployment job runs. The procedure to download artifacts depends on the type of artifact. For example, Azure Pipelines artifacts are downloaded using an algorithm that downloads multiple files in parallel. Git artifacts are downloaded using Git library functionality. For more details, see Artifact download.

Artifact sources

There are several types of tools you might use in your application lifecycle process to produce or store artifacts. For example, you might use continuous integration systems such as Azure Pipelines, Jenkins, or TeamCity to produce artifacts. You might also use version control systems such as Git or TFVC to store your artifacts. Or you can use repositories such as Package Management in Visual Studio Team Services or a NuGet repository to store your artifacts. You can configure Azure Pipelines to deploy artifacts from all these sources.

By default, a release created from the release pipeline will use the latest version of the artifacts. When linking an artifact source to a release pipeline, you can change this behavior by selecting one of the options: use the latest build from a specific branch (optionally filtered by tags), use a specific version, or allow the user to specify the version when the release is created from the pipeline.

Adding an artifact

If you link more than one set of artifacts, you can specify which is the primary (default).

Selecting a default version option

The following sections describe how to work with the different types of artifact sources.


Azure Pipelines

You can link a release pipeline to any of the build pipelines in an Azure Pipelines or TFS project collection.

Note

You must include a Publish Artifacts task in your build pipeline. For XAML build pipelines, an artifact with the name drop is published implicitly.

Some of the differences in capabilities between different versions of TFS and Azure Pipelines are:

  • TFS 2015: You can link build pipelines only from the same project of your collection. You can link multiple definitions, but you cannot specify default versions. You can set up a continuous deployment trigger on only one of the definitions. When multiple build pipelines are linked, the latest builds of all the other definitions are used, along with the build that triggered the release creation.
  • TFS 2017 and newer and Azure Pipelines: You can link build pipelines from any of the projects in Azure Pipelines or TFS. You can link multiple build pipelines and specify default values for each of them. You can set up continuous deployment triggers on multiple build sources. When any of the builds completes, it will trigger the creation of a release.

The following features are available when using Azure Pipelines sources:

Auto-trigger releases: New releases can be created automatically when new builds (including XAML builds) are produced. See Continuous Deployment for details. You do not need to configure anything within the build pipeline. See the notes above for differences between versions of TFS.
Artifact variables: A number of artifact variables are supported for builds from Azure Pipelines.
Work items and commits: Azure Pipelines integrates with work items in TFS and Azure Pipelines. These work items are also shown in the details of releases. Azure Pipelines integrates with a number of version control systems such as TFVC, Git, GitHub, Subversion, and external Git repositories. Azure Pipelines shows the commits only when the build is produced from source code in TFVC or Git.
Artifact download: By default, build artifacts are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.
Deployment section in build: The build summary includes a Deployment section, which lists all the stages to which the build was deployed.

TFVC, Git, and GitHub

There are scenarios in which you may want to consume artifacts stored in a version control system directly, without passing them through a build pipeline. For example:

  • You are developing a PHP or a JavaScript application that does not require an explicit build pipeline.
  • You manage configurations for various stages in different version control repositories, and you want to consume these configuration files directly from version control as part of the deployment pipeline.
  • You manage your infrastructure and configuration as code (such as Azure Resource Manager templates) and you want to manage these files in a version control repository.

Because you can configure multiple artifact sources in a single release pipeline, you can link both a build pipeline that produces the binaries of the application as well as a version control repository that stores the configuration files into the same pipeline, and use the two sets of artifacts together while deploying.

Azure Pipelines integrates with Team Foundation Version Control (TFVC) repositories, Git repositories, and GitHub repositories.

You can link a release pipeline to any of the Git or TFVC repositories in any of the projects in your collection (you will need read access to these repositories). No additional setup is required when deploying version control artifacts within the same collection.

When you link a Git or GitHub repository and select a branch, you can edit the default properties of the artifact types after the artifact has been saved. This is particularly useful in scenarios where the branch for the stable version of the artifact changes, and continuous delivery releases should use this branch to obtain newer versions of the artifact. You can also specify details of the checkout, such as whether to check out submodules and LFS-tracked files, and the shallow fetch depth.

When you link a TFVC branch, you can specify the changeset to be deployed when creating a release.

The following features are available when using TFVC, Git, and GitHub sources:

Auto-trigger releases: You can configure a continuous deployment trigger for pushes into the repository in a release pipeline. This can automatically trigger a release when a new commit is made to a repository. See Triggers.
Artifact variables: A number of artifact variables are supported for version control sources.
Work items and commits: Azure Pipelines cannot show work items or commits associated with releases when using version control artifacts.
Artifact download: By default, version control artifacts are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

Jenkins

To consume Jenkins artifacts, you must create a service connection with credentials to connect to your Jenkins server. For more details, see service connections and Jenkins service connection. You can then link a Jenkins project to a release pipeline. The Jenkins project must be configured with a post build action to publish the artifacts.

The following features are available when using Jenkins sources:

Auto-trigger releases: You can configure a continuous deployment trigger for pushes into the repository in a release pipeline. This can automatically trigger a release when a new commit is made to a repository. See Triggers.
Artifact variables: A number of artifact variables are supported for builds from Jenkins.
Work items and commits: Azure Pipelines cannot show work items or commits for Jenkins builds.
Artifact download: By default, Jenkins builds are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

Artifacts generated by Jenkins builds are typically propagated to storage repositories for archiving and sharing. Azure blob storage is one of the supported repositories, allowing you to consume Jenkins projects that publish to Azure storage as artifact sources in a release pipeline. Deployments download the artifacts automatically from Azure to the agents. In this configuration, connectivity between the agent and the Jenkins server is not required. Microsoft-hosted agents can be used without exposing the server to the internet.

Note

Azure Pipelines may not be able to contact your Jenkins server if, for example, it is within your enterprise network. In this case you can integrate Azure Pipelines with Jenkins by setting up an on-premises agent that can access the Jenkins server. You will not be able to see the name of your Jenkins projects when linking to a build, but you can type this into the link dialog field.

For more information about Jenkins integration capabilities, see Azure Pipelines Integration with Jenkins Jobs, Pipelines, and Artifacts.


Azure Container Registry, Docker, Kubernetes

When deploying containerized apps, the container image is first pushed to a container registry. After the push is complete, the container image can be deployed to the Web App for Containers service or a Docker/Kubernetes cluster. You must create a service connection with credentials to connect to your service to deploy images located there, or to Azure. For more details, see service connections.

The following features are available when using Azure Container Registry, Docker, Kubernetes sources:

Auto-trigger releases: You can configure a continuous deployment trigger for images. This can automatically trigger a release when a new version of an image is pushed to the registry. See Triggers.
Artifact variables: A number of artifact variables are supported for builds.
Work items and commits: Azure Pipelines cannot show work items or commits.
Artifact download: By default, builds are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

NuGet and npm packages from Package Management

To integrate with NuGet, or npm (Maven is not currently supported), you must first assign licenses for the Package Management extension from the Marketplace. For more information, see the Package Management Overview.

Scenarios where you may want to consume Package Management artifacts are:

  1. You have your application build (such as TFS, Azure Pipelines, TeamCity, Jenkins) published as a package (NuGet or npm) to Package Management and you want to consume the artifact in a release.
  2. As part of your application deployment, you need additional packages stored in Package Management.

When you link a Package Management artifact to your release pipeline, you must select the Feed, Package, and the Default version for the package. You can choose to pick up the latest version of the package, use a specific version, or select the version at the time of release creation. During deployment, the package is downloaded to the agent folder and the contents are extracted as part of the job execution.

The following features are available when using Package Management sources:

Auto-trigger releases: You can configure a continuous deployment trigger for packages. This can automatically trigger a release when a package is updated. See Triggers.
Artifact variables: A number of artifact variables are supported for packages.
Work items and commits: Azure Pipelines cannot show work items or commits.
Artifact download: By default, packages are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

External or on-premises TFS

You can use Azure Pipelines to deploy artifacts published by an on-premises TFS server. You don’t need to make the TFS server visible on the Internet; you just set up an on-premises automation agent. Builds from an on-premises TFS server are downloaded directly into the on-premises agent, and then deployed to the specified target servers. They will not leave your enterprise network. This allows you to leverage all of your investments in your on-premises TFS server, and take advantage of the release capabilities in Azure Pipelines.

Using this mechanism, you can also deploy artifacts published in one Azure Pipelines subscription from another Azure Pipelines subscription, or deploy artifacts published in one Team Foundation Server from another Team Foundation Server.

To enable these scenarios, you must install the TFS artifacts for Azure Pipelines extension from Visual Studio Marketplace. Then create a service connection with credentials to connect to your TFS server (see service connections for details).

You can then link a TFS build pipeline to your release pipeline. Choose External TFS Build in the Type list.

The following features are available when using external TFS sources:

Auto-trigger releases: You cannot configure a continuous deployment trigger for external TFS sources in a release pipeline. To automatically create a new release when a build is complete, you would need to add a script to your build pipeline in the external TFS server to invoke Azure Pipelines REST APIs and to create a new release.
Artifact variables: A number of artifact variables are supported for external TFS sources.
Work items and commits: Azure Pipelines cannot show work items or commits for external TFS sources.
Artifact download: By default, external TFS artifacts are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

Note

Azure Pipelines may not be able to contact an on-premises TFS server if, for example, it is within your enterprise network. In this case you can integrate Azure Pipelines with TFS by setting up an on-premises agent that can access the TFS server. You will not be able to see the name of your TFS projects or build pipelines when linking to a build, but you can type these into the link dialog fields. In addition, when you create a release, Azure Pipelines may not be able to query the TFS server for the build numbers. Instead, type the Build ID (not the build number) of the desired build into the appropriate field, or select the Latest build.


TeamCity

To integrate with TeamCity, you must first install the TeamCity artifacts for Azure Pipelines extension from Marketplace.

To consume TeamCity artifacts, start by creating a service connection with credentials to connect to your TeamCity server (see service connections for details).

You can then link a TeamCity build configuration to a release pipeline. The TeamCity build configuration must be configured with an action to publish the artifacts.

The following features are available when using TeamCity sources:

Auto-trigger releases: You cannot configure a continuous deployment trigger for TeamCity sources in a release pipeline. To create a new release automatically when a build is complete, add a script to your TeamCity project that invokes the Azure Pipelines REST APIs to create a new release.
Artifact variables: A number of artifact variables are supported for builds from TeamCity.
Work items and commits: Azure Pipelines cannot show work items or commits for TeamCity builds.
Artifact download: By default, TeamCity builds are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

Note

Azure Pipelines may not be able to contact your TeamCity server if, for example, it is within your enterprise network. In this case you can integrate Azure Pipelines with TeamCity by setting up an on-premises agent that can access the TeamCity server. You will not be able to see the name of your TeamCity projects when linking to a build, but you can type this into the link dialog field.


Other sources

Your artifacts may be created and exposed by other types of sources such as a NuGet repository. While we continue to expand the types of artifact sources supported in Azure Pipelines, you can start using it without waiting for support for a specific source type. Simply skip the linking of artifact sources in a release pipeline, and add custom tasks to your stages that download the artifacts directly from your source.


Artifact download

When you deploy a release to a stage, the versioned artifacts from each of the sources are, by default, downloaded to the automation agent so that tasks running within that stage can deploy these artifacts. The artifacts downloaded to the agent are not deleted when a release is completed. However, when you initiate the next release, the downloaded artifacts are deleted and replaced with the new set of artifacts.

A new unique folder in the agent is created for every release pipeline when you initiate a release, and the artifacts are downloaded into that folder. The $(System.DefaultWorkingDirectory) variable maps to this folder.

Note that, at present, Azure Pipelines does not perform any optimization to avoid downloading the unchanged artifacts if the same release is deployed again. In addition, because the previously downloaded contents are always deleted when you initiate a new release, Azure Pipelines cannot perform incremental downloads to the agent.

You can, however, instruct Azure Pipelines to skip the automatic download of artifacts to the agent for a specific job and stage of the deployment if you wish. Typically, you will do this when the tasks in that job do not require any artifacts, or if you implement custom code in a task to download the artifacts you require.

In Azure Pipelines, you can also select which artifacts you want to download to the agent for a specific job and stage of the deployment. Typically, you will do this to improve the efficiency of the deployment pipeline when the tasks in that job do not require all or any of the artifacts, or if you implement custom code in a task to download the artifacts you require.

Selecting the artifacts to download

Artifact source alias

To ensure the uniqueness of every artifact download, each artifact source linked to a release pipeline is automatically provided with a specific download location known as the source alias. This location can be accessed through the variable:

$(System.DefaultWorkingDirectory)\[source alias]

This uniqueness also ensures that, if you later rename a linked artifact source in its original location (for example, rename a build pipeline in Azure Pipelines or a project in Jenkins), you don’t need to edit the task properties because the download location defined in the agent does not change.

The source alias is, by default, the name of the source selected when you linked the artifact source, prefixed with an underscore; depending on the type of the artifact source this will be the name of the build pipeline, job, project, or repository. You can edit the source alias from the artifacts tab of a release pipeline; for example, when you change the name of the build pipeline and you want to use a source alias that reflects the name of the build pipeline.

The source alias can contain only alphanumeric characters and underscores, and must start with a letter or an underscore.

Primary source

When you link multiple artifact sources to a release pipeline, one of them is designated as the primary artifact source. The primary artifact source is used to set a number of pre-defined variables. It can also be used in naming releases.

Artifact variables

Azure Pipelines exposes a set of pre-defined variables that you can access and use in tasks and scripts; for example, when executing PowerShell scripts in deployment jobs. When there are multiple artifact sources linked to a release pipeline, you can access information about each of these. For a list of all pre-defined artifact variables, see variables.


How to use an SQL query with parameters in Excel

1 Intro

2 Set up the query to run as a stored procedure

3 Prepare Excel

4 Make ODBC connection to database in SQL

5 Prepare “Microsoft query”

6 Link fields to query parameters

Simple example query:

declare @StartDate datetime set @StartDate = '2018-01-01'
declare @EndDate datetime set @EndDate = '2018-01-31'
select * from tblOrder where orderdate >= @StartDate and orderdate <= @EndDate

Create and run a script that creates a stored procedure:

CREATE PROCEDURE spSelectOrder
    -- Add the parameters for the stored procedure here
    @StartDate As DateTime,
    @EndDate As DateTime
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- Insert statements for procedure here
    select * from tblOrder
    where orderdate >= @StartDate and orderdate <= @EndDate
END
GO
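Before wiring the procedure up to Excel, you can verify it directly from a query window; the dates below are just sample values:

-- Quick test of the stored procedure
EXEC spSelectOrder @StartDate = '2018-01-01', @EndDate = '2018-01-31'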

This stored procedure can be run in Excel using Microsoft query. To run this query, prepare a worksheet with the parameters filled. These parameters will be used as input for the query later.

The next step is to add the data source to the worksheet. On the Data tab, choose “From Other Sources” and then “From Microsoft Query”. This will start a wizard to create a data connection:

1 Select or create a Data Source

2 The next step in the wizard is Choose Columns. Cancel the wizard on this screen and a question will pop up asking you if you want to continue editing this query. Click Yes.

3 MS Query will be started and a dialog to select tables will be presented. Close this dialog.

4 Click the SQL button in the button bar, or choose View, SQL from the menu.

5 Type “call” followed by the stored procedure name and question marks for the parameter input between parentheses. Place this between curly braces. These curly braces are required to avoid syntax check errors.

{call spSelectOrder (?, ?)}

6 Press OK and you will be prompted to input values for the required parameters.

The results will be retrieved in a query result window. After the result is presented, go to File and click “Return Data to Microsoft Excel”.

Microsoft Query will be closed and you will return to Excel with a dialog to import the data. Choose cell A4 in the worksheet.

Again you will be prompted to input values for the parameters. This time you are able to select cells B1 and B2 as input. Check the checkbox in the dialog to use this reference for future refreshes. If you want, you can also check the option to refresh the data when the cell value changes.

If you want to refresh the data manually, you can right-click anywhere in the data grid and select “Refresh”.