How to calculate Pi

How to Calculate Pi, π, Simpson’s Rule
Nearly everyone knows that Pi, or π, is a peculiar number that is a little more than three. Most of those people know that the number has importance because it is the ratio of the distance around a circle (its circumference) to that circle's diameter. It is what is called an irrational number: no fraction can be written that exactly equals it, and in fact the decimal digits needed to define Pi precisely would go on forever. So how is that number determined?
We will present two different methods of calculating it here. The first could be called the hard way, and it is the approach that the ancient Greeks used 2300 years ago to first determine a value for Pi. The second is a much more sophisticated approach, which relies both on Calculus and on something called Simpson's Rule to arrive at a far more precise value rather simply and easily.

Continued:

The Original Way of Determining Pi
First, let us define a circle that has what is called Unity diameter. That is, it has a Radius = 0.5 and a Diameter = 1.00.
We could INSCRIBE an equilateral triangle inside the circle, where the points/corners just touch the circle. We could carefully measure the length of the sides of that triangle, and would find that they are each slightly over 0.866 units long. With the triangle having three sides, the total perimeter of the triangle is therefore about 2.6 units. We can see that the distance around the circle is greater than this, in other words, Pi must be greater than 2.6. In the same way, we could draw an equilateral triangle which is larger, where the midpoints of the sides each exactly touch the circle, and we can measure the length of those sides to be around 1.732 units. Again, with three sides, we have a total perimeter of this triangle to be around 5.2 units, so we know the distance around the circle must be less than 5.2.
Now, if we do the same thing using squares instead, the larger number of sides more closely follows the shape of the circle and we get better results, indicating that Pi must be between 2.83 and 4.00. If we use five-sided pentagons instead, the result is better yet, Pi being between 2.94 and 3.63. By using six-sided hexagons, Pi is shown to be between 3.00 and 3.46.
For the ancient Greeks, this proceeded fairly well, but it took a lot of time and effort, and it required really accurate measurements of the lengths of the sides of the regular polygons, and also really accurate drawings of those polygons so that they truly were regular (all sides equal). However, the process was continued (skipping many numbers of sides) up to 120 sides. If you think about it, a 120-sided inscribed polygon would very closely resemble the shape of the circle, and would therefore closely indicate the value of Pi. In fact, by using 120-sided polygons, we can determine that Pi must be between 3.1412 and 3.1423, decently close to the 3.1416 that we all know. And if you average the two values (the lower limit and the upper limit) you get 3.1418, a value that is quite close!
However, that value is not close enough for modern engineering requirements, which is why the advanced approach presented below is now considered far better. Here is a chart of the (measured and calculated) values for various numbers of polygon sides. Note that if we do this for 2000-sided polygons, the value becomes quite close. The measurement itself, though, becomes extremely difficult for such short polygon sides, when each side's length must be known to an accuracy of better than one part in a million!
Number of sides    inside: one side    inside: total    outside: one side    outside: total    average of in/out totals
3 0.866025 2.598076 1.732051 5.196152 3.897114
4 0.707107 2.828427 1.000000 4.000000 3.414214
5 0.587785 2.938926 0.726543 3.632713 3.285820
6 0.500000 3.000000 0.577350 3.464102 3.232051
7 0.433884 3.037186 0.481575 3.371022 3.204104
8 0.382683 3.061467 0.414214 3.313709 3.187588
9 0.342020 3.078181 0.363970 3.275732 3.176957
10 0.309017 3.090170 0.324920 3.249197 3.169683
11 0.281733 3.099058 0.293626 3.229891 3.164475
12 0.258819 3.105829 0.267949 3.215390 3.160609
13 0.239316 3.111104 0.246478 3.204212 3.157658
14 0.222521 3.115293 0.228243 3.195409 3.155351
15 0.207912 3.118675 0.212557 3.188348 3.153512
16 0.195090 3.121445 0.198912 3.182598 3.152021
17 0.183750 3.123742 0.186932 3.177851 3.150796
18 0.173648 3.125667 0.176327 3.173886 3.149776
19 0.164595 3.127297 0.166870 3.170539 3.148918
20 0.156434 3.128689 0.158384 3.167689 3.148189
120 0.026177 3.141234 0.026186 3.142311 3.141772
480 0.006545 3.141570 0.006545 3.141637 3.141604
2000 0.001571 3.141591 0.001571 3.141595 3.141593
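For anyone who wants to reproduce the chart above without drawing and measuring, here is a brief sketch in Python (my own illustration, not part of the original presentation; the function name is just for the sketch). For a circle of diameter 1, one side of the inscribed regular polygon has length sin(180°/n) and one side of the circumscribed polygon has length tan(180°/n), so the sketch simply uses the math library's value of Pi to generate the table; it illustrates the geometry the Greeks measured by hand rather than independently determining Pi.

import math

def polygon_bounds(n):
    # Circle of diameter 1 (radius 0.5): side lengths of the regular n-gons.
    inside_side = math.sin(math.pi / n)    # inscribed polygon, one side
    outside_side = math.tan(math.pi / n)   # circumscribed polygon, one side
    # Total perimeters: lower and upper bounds on the circumference, which is Pi.
    return n * inside_side, n * outside_side

for n in (3, 4, 5, 6, 120, 2000):
    lo, hi = polygon_bounds(n)
    print(n, round(lo, 6), round(hi, 6), round((lo + hi) / 2, 6))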

The Improved Way of Determining Pi
This will be kept fairly painless! Actually, you do not need to know any Calculus, or even anything beyond simple adding, multiplying and dividing to do this! These next few paragraphs just explain the basis of why this works, which is because of some results in Calculus.
We first need to note that, by the definition of Pi, the circumference of any circle is Pi times its diameter. For a circle of radius 1, the circumference is therefore 2 * Pi, so half of the circle, or 180 degrees, corresponds to an arc length of exactly Pi (we say that 180 degrees equals Pi radians).
It is fairly easy to prove in Calculus that the Derivative of the Inverse Tangent (a trigonometry term) is equal to 1/(1 + X²). That means the ANTI-Derivative of 1/(1 + X²) is the Inverse Tangent, and since 1/(1 + X²) is a continuous function (we stay away from the points where the Tangent itself misbehaves), its Anti-Derivative is the same as its Integral. So the Integral of 1/(1 + X²) over an interval of X equals the change in the Inverse Tangent over that interval.
We can select a specific range of angles, and for simplicity we select from zero up to the angle whose tangent is exactly 1, which is the angle we usually call 45 degrees. So if we just evaluate the quantity 1/(1 + X²) over the range of X from 0 to 1 and add it all up (as a Calculus Integral does), the result equals the difference between those two angles, expressed in radians. Since 180 degrees equals Pi radians, our 45 degree range is just Pi/4, exactly.
Therefore, by evaluating the Integral of our 1/(1 + X²) over the range of 0 to 1, we would get a result that was exactly equal to Pi/4. We’re getting somewhere!
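Written out as a single worked equation (using the standard arctan notation for the Inverse Tangent), that statement is simply:

Integral from 0 to 1 of 1/(1 + X²) dX = arctan(1) − arctan(0) = Pi/4 − 0 = Pi/4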
There is a well proven rule in mathematics called Simpson’s Rule. It is actually an approximation, but a really good one, which is essentially based on the fact that if any three points of a curve are known, a unique parabolic curve can be drawn through them, and the simple quadratic formula for that parabola then gives a very good estimate of the curve in that section. In any case, Simpson’s Rule is fairly simple to apply when the points on the curve are equally spaced along a coordinate and there is an even number of intervals between those points. We will use the simple example of four intervals, or 5 data points, here.
Our whole range is from 0 to 1, so our interval must be exactly 1/4, giving values for X of 0, 1/4, 1/2, 3/4, and 1. We can easily calculate 1/(1 + X²) for each of these values, to get 1, 16/17, 4/5, 16/25, and 1/2. Simpson’s Rule is actually very simple: these various terms get multiplied by either 1, 2, or 4 and added together. I am going to make you find some textbook for the exact pattern, but it is really extremely simple, and if presented here, it might distract from the central point! (The following tables do indicate those multipliers.)
We can present our data here in a small table:
Number of divisions = 4
value of X    calculated quantity 1/(1 + X²)    multiplier    running total of multiplied quantities
0.00 1.0000000 1 1.0000000
0.25 0.9411765 4 4.7647059
0.50 0.8000000 2 6.3647059
0.75 0.6400000 4 8.9247059
1.00 0.5000000 1 9.4247059
According to Simpson’s Rule we now need to divide this by 3 and multiply by the size of our intervals (1/4), in other words, in this case, dividing by 12. We then get a result of 0.7853921569.
This value is then equal to the number of radians in 45 degrees. To get the number of radians in 180 degrees, in other words Pi, we just multiply by four. We then get 3.1415686275.
Given how simple this was to do, easily done with pencil and paper, it is pretty impressive that we get a result that is surprisingly precise!
So now we decide to use six intervals instead of four!
Number of divisions = 6
value of X    calculated quantity 1/(1 + X²)    multiplier    running total of multiplied quantities
0 1.0000000 1 1.0000000
1/6 0.9729730 4 4.8918919
1/3 0.9000000 2 6.6918919
1/2 0.8000000 4 9.8918919
2/3 0.6923077 2 11.2765073
5/6 0.5901639 4 13.6371630
1 0.5000000 1 14.1371630
We must now divide this by 3 and multiply by 1/6, or actually, divide by 18, to get: 0.7853979452
Multiplying this by four gives 3.1415917809, an even better value for Pi.
It seems important to again note that this is a simple pencil and paper calculation that only involves simple addition, multiplication and division, and no exotic math stuff! Impressive, huh?
Well, you are free to invest maybe an hour in doing this calculation for 26 intervals. You will get a result of 0.7853981634 and then 3.1415926535 for the value of Pi, which is accurate to all ten of these decimal places!
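If you would rather let a computer do the adding, here is a small Python sketch of exactly this Simpson's Rule calculation (again my own illustration; the function names are invented for the sketch, and the 1, 4, 2, 4, ..., 2, 4, 1 multiplier pattern is the one the two tables above display). It works with exact fractions internally, then multiplies by four at the end to turn Pi/4 into Pi.

from fractions import Fraction

def f(x):
    # The quantity being integrated: 1 / (1 + X^2)
    return Fraction(1) / (1 + x * x)

def simpson_pi(intervals):
    # 'intervals' must be even; h is the spacing between the X values.
    h = Fraction(1, intervals)
    total = f(Fraction(0)) + f(Fraction(1))     # the two endpoints get multiplier 1
    for i in range(1, intervals):
        multiplier = 4 if i % 2 == 1 else 2     # interior points alternate 4, 2, 4, ...
        total += multiplier * f(i * h)
    # Simpson's Rule: divide the weighted sum by 3 and multiply by h (giving Pi/4), then by 4.
    return float(total * h / 3 * 4)

for n in (4, 6, 26):
    print(n, simpson_pi(n))

The printed values for 4 and 6 intervals match the hand calculations above.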

So, just in case you had thought that the original ancient Greek approach was still used, with polygons having billions of sides, in determining the extremely accurate values for Pi (sometimes given to 100 or one million decimals), now you know how it is actually done! Setting up a computer to do these simple additions, multiplications and divisions is pretty easy, and the only limitation then is the accuracy of the values of the numbers used in the computer. If you use a computer system that has 40 significant digits, even YOU could now quickly calculate Pi to 40 decimals!

Yet another modern way!
It turns out that there is a way to do this without having to rely on Simpson’s Rule. There is another Calculus proof that the quantity we used above, the Integral of 1/(1 + X²), or the Inverse Tangent of X, can be expressed as an alternating infinite series:
X − X³/3 + X⁵/5 − X⁷/7 + X⁹/9 − …
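The reason, sketched in one line rather than proved: for values of X smaller than 1, 1/(1 + X²) = 1 − X² + X⁴ − X⁶ + … (an ordinary geometric series), and integrating each term from 0 to X gives exactly X − X³/3 + X⁵/5 − X⁷/7 + …; with a little extra care the result can be shown to hold at X = 1 as well.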
Since our values of X are zero and one, this simplifies. When X = 0 it is obvious that this is a sum of a lot of zeroes, or zero. For X = 1, this is just:
1 – 1/3 + 1/5 – 1/7 + 1/9 …
So the difference, the number we want, namely Pi/4, is just the total of this last infinite series. Sounds easy, huh? It is, except that you have to include a LOT of terms to get a very accurate value for Pi!
If you do this for a sum of the terms through 1/43 (22 terms), you get a value for Pi of only 3.0961615.
If you do this for a sum of the terms up to 1/4001 (2000 terms), you get a value for Pi of 3.1410931.
If you do this for a sum of the terms up to 1/400001 (200,000 terms), you get a value for Pi of 3.1415876.
If you do this for a sum of the terms up to 1/4000001 (2,000,000 terms), you get a value for Pi of 3.14159215.
If you do this for a sum of the terms up to 1/40000001 (20,000,000 terms), you get a value for Pi of 3.14159260.
That would be a LOT of additions and subtractions to get a value for Pi that still is not very impressive! We noted above that the actual value for Pi to ten decimals is 3.1415926535, so with this other method, our 20 million additions and subtractions still only get a precision to around 7 correct decimals. Not nearly as good as the Simpson’s Rule method above, even though it initially looks very attractive!
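If you would like to watch how slowly this series creeps toward Pi, here is another small Python sketch (again just an illustration, not part of the original page; the function name is invented for the sketch). The exact digits you get for a given count depend on precisely where you cut the series off, but the slow convergence is unmistakable.

import math

def leibniz_pi(terms):
    # Partial sum of 1 - 1/3 + 1/5 - 1/7 + ..., multiplied by 4 at the end.
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

for n in (20, 2000, 200000):
    estimate = leibniz_pi(n)
    print(n, "terms:", estimate, "error:", abs(math.pi - estimate))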
But we are showing that there are many ways to skin a cat! (figuratively speaking, of course!)

This presentation was first placed on the Internet in November 2006.
This page, How to Calculate Pi, π, Simpson’s Rule, is at http://mb-soft.com/public3/pi.html
This subject presentation was last updated on 01/12/2011 03:40:26


Useful SQL Server DBCC Commands

Useful link:
http://www.sql-server-performance.com/tips/dbcc_commands_p1.aspx
If the link should fail, here is the content:

Continued:
Useful SQL Server DBCC Commands
By : Brad McGehee

DBCC CACHESTATS displays information about the objects currently in the buffer cache, such as hit rates, compiled objects and plans, etc. Example:
DBCC CACHESTATS
Sample Results (abbreviated):
Object Name Hit Ratio
———— ————-
Proc 0.86420054765378507
Prepared 0.99988494930394334
Adhoc 0.93237136647793051
ReplProc 0.0
Trigger 0.99843452831887947
Cursor 0.42319205924058612
Exec Cxt 0.65279111666076906
View 0.95740334726893905
Default 0.60895011346896522
UsrTab 0.94985969576133511
SysTab 0.0
Check 0.67021276595744683
Rule 0.0
Summary 0.80056155581812771
Here’s what some of the key statistics from this command mean:
⦁ Hit Ratio: Displays the percentage of time that this particular object was found in SQL Server’s cache. The bigger this number, the better.
⦁ Object Count: Displays the total number of objects of the specified type that are cached.
⦁ Avg. Cost: A value used by SQL Server that measures how long it takes to compile a plan, along with the amount of memory needed by the plan. This value is used by SQL Server to determine if the plan should be cached or not.
⦁ Avg. Pages: Measures the total number of 8K pages used, on average, for cached objects.
⦁ LW Object Count, LW Avg Cost, LW Avg Stay, LW Avg Use: All these columns indicate how many of the specified objects have been removed from the cache by the Lazy Writer. The lower the figure, the better.
[7.0, 2000] Updated 9-1-2005

DBCC DROPCLEANBUFFERS: Use this command to remove all the data from SQL Server’s data cache (buffer) between performance tests to ensure fair testing. Keep in mind that this command only removes clean buffers, not dirty buffers. Because of this, you may want to run the CHECKPOINT command before running DBCC DROPCLEANBUFFERS. Running CHECKPOINT will write all dirty buffers to disk, so when you then run DBCC DROPCLEANBUFFERS you can be assured that all data buffers are cleaned out, not just the clean ones. Example:
DBCC DROPCLEANBUFFERS
[7.0, 2000, 2005] Updated 9-1-2005

DBCC ERRORLOG: If you rarely restart the mssqlserver service, you may find that your server log gets very large and takes a long time to load and view. You can truncate the current server log (essentially creating a new log) by running DBCC ERRORLOG. You might want to consider scheduling a regular job that runs this command once a week to automatically truncate the server log. As a rule, I do this for all of my SQL Servers on a weekly basis. You can also accomplish the same thing using the stored procedure sp_cycle_errorlog. Example:
DBCC ERRORLOG
[7.0, 2000, 2005] Updated 9-1-2005

DBCC FLUSHPROCINDB: Used to clear out the stored procedure cache for a specific database on a SQL Server, not the entire SQL Server. The database ID number to be affected must be entered as part of the command. You may want to use this command before testing to ensure that previous stored procedure plans won’t negatively affect testing results. Example:
DECLARE @intDBID INTEGER
SET @intDBID = (SELECT dbid FROM master.dbo.sysdatabases WHERE name = 'database_name')
DBCC FLUSHPROCINDB (@intDBID)
[7.0, 2000, 2005] Updated 9-1-2005

DBCC INDEXDEFRAG: In SQL Server 2000, Microsoft introduced DBCC INDEXDEFRAG to help reduce logical disk fragmentation. When this command runs, it reduces fragmentation without locking tables, allowing users to access a table while the defragmentation process is running. Unfortunately, this command doesn’t do a great job of logical defragmentation. The only way to truly reduce logical fragmentation is to rebuild your table’s indexes. While this will remove all fragmentation, it will unfortunately lock the table, preventing users from accessing it during this process. This means that you will need to find a time when this will not present a problem to your users. Of course, if you are unable to find a time to rebuild your indexes, then running DBCC INDEXDEFRAG is better than doing nothing. Example:
DBCC INDEXDEFRAG (Database_Name, Table_Name, Index_Name)
[2000] Updated 9-1-2005

DBCC FREEPROCCACHE: Used to clear out the stored procedure cache for all SQL Server databases. You may want to use this command before testing to ensure that previous stored procedure plans won’t negatively affect testing results. Example:
DBCC FREEPROCCACHE
[7.0, 2000, 2005] Updated 10-16-2005

DBCC MEMORYSTATUS: Lists a breakdown of how the SQL Server buffer cache is divided up, including buffer activity. This is an undocumented command, and one that may be dropped in future versions of SQL Server. Example:
DBCC MEMORYSTATUS
[7.0, 2000] Updated 10-16-2005

DBCC OPENTRAN: An open transaction can leave locks open, preventing others from accessing the data they need in a database. This command is used to identify the oldest open transaction in a specific database. Example:
DBCC OPENTRAN('database_name')
[7.0, 2000] Updated 10-16-2005

DBCC PAGE: Use this command to look at the contents of a data page stored in SQL Server. Example:
DBCC PAGE ({dbid|dbname}, pagenum [,print option] [,cache] [,logical])
where:
⦁ Dbid or dbname: Enter either the dbid or the name of the database in question.
⦁ Pagenum: Enter the page number of the SQL Server page that is to be examined.
⦁ Print option: (Optional) Print option can be either 0, 1, or 2. 0 – (Default) This option causes DBCC PAGE to print out only the page header information. 1 – This option causes DBCC PAGE to print out the page header information, each row of information from the page, and the page’s offset table. Each of the rows printed out will be separated from each other. 2 – This option is the same as option 1, except it prints the page rows as a single block of information rather than separating the individual rows. The offset and header will also be displayed.
⦁ Cache: (Optional) This parameter allows either a 1 or a 0 to be entered. 0 – This option causes DBCC PAGE to retrieve the page number from disk rather than checking to see if it is in cache. 1 – (Default) This option takes the page from cache if it is in cache rather than getting it from disk only.
⦁ Logical: (Optional) This parameter is for use if the page number that is to be retrieved is a virtual page rather than a logical page. It can be either 0 or 1. 0 – If the page is to be a virtual page number. 1 – (Default) If the page is the logical page number.

[6.5, 7.0, 2000] Updated 10-16-2005

DBCC PINTABLE & DBCC UNPINTABLE: By default, SQL Server automatically brings into its data cache the pages it needs to work with. These data pages will stay in the data cache until there is no room for them, and assuming they are not needed, these pages will be flushed out of the data cache onto disk. At some point in the future when SQL Server needs these data pages again, it will have to go to disk in order to read them again into the data cache for use. If SQL Server somehow had the ability to keep the data pages in the data cache all the time, its performance would increase because I/O on the server could be reduced. The process of "pinning a table" is a way to tell SQL Server that we don't want it to flush out data pages for specific named tables once they are read into the cache in the first place. This in effect keeps these database pages in the data cache all the time, and saves SQL Server from having to read the data pages, flush them out, and reread them when they are needed again. As you can imagine, this can reduce I/O for these pinned tables, boosting SQL Server's performance. To pin a table, the command DBCC PINTABLE is used. For example, the script below can be run to pin a table in SQL Server:
DECLARE @db_id int, @tbl_id int
USE Northwind
SET @db_id = DB_ID('Northwind')
SET @tbl_id = OBJECT_ID('Northwind..categories')
DBCC PINTABLE (@db_id, @tbl_id)
While you can use DBCC PINTABLE directly, without the rest of the above script, you will find the script handy because DBCC PINTABLE's parameters refer to the database and table by their ID numbers, not by their names. This script makes it a little easier to pin a table. You must run this command for every table you want to pin. Once a table is pinned in the data cache, this does not mean that the entire table is automatically loaded into the data cache. It only means that as data pages from that table are needed by SQL Server, they are loaded into the data cache and then stay there, never being flushed out to disk until you give the command to unpin the table using DBCC UNPINTABLE. It is possible that only part of a table, and not all of it, will be pinned. When you are done with a table and no longer want it pinned, you will want to unpin it. To do so, run this example code:
DECLARE @db_id int, @tbl_id int
USE Northwind
SET @db_id = DB_ID('Northwind')
SET @tbl_id = OBJECT_ID('Northwind..categories')
DBCC UNPINTABLE (@db_id, @tbl_id)
[6.5, 7.0, 2000] Updated 10-16-2005

DBCC PROCCACHE: Displays information about how the stored procedure cache is being used. Example:
DBCC PROCCACHE
[6.5, 7.0, 2000] Updated 10-16-2005

DBCC DBREINDEX: Periodically (weekly or monthly) perform a database reorganization on all the indexes on all the tables in your database. This will rebuild the indexes so that the data is no longer fragmented. Fragmented data can cause SQL Server to perform unnecessary data reads, slowing down SQL Server’s performance. If you perform a reorganization on a table with a clustered index, any non-clustered indexes on that same table will automatically be rebuilt. Database reorganizations can be done by scheduling SQLMAINT.EXE to run using the SQL Server Agent, or by running your own custom script via the SQL Server Agent (see below). Unfortunately, the DBCC DBREINDEX command will not automatically rebuild all of the indexes on all the tables in a database; it can only work on one table at a time. But if you run the following script, you can reindex all the tables in a database with ease. Example:
DBCC DBREINDEX('table_name', fillfactor)
or
--Script to automatically reindex all tables in a database

USE DatabaseName --Enter the name of the database you want to reindex

DECLARE @TableName varchar(255)

DECLARE TableCursor CURSOR FOR
SELECT table_name FROM information_schema.tables
WHERE table_type = 'BASE TABLE'

OPEN TableCursor

FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT 'Reindexing ' + @TableName
DBCC DBREINDEX(@TableName, ' ', 90)
FETCH NEXT FROM TableCursor INTO @TableName
END

CLOSE TableCursor

DEALLOCATE TableCursor
The script will automatically reindex every index in every table of any database you select, and apply a fill factor of 90%. You can substitute any number you want for the 90 in the above script. When DBCC DBREINDEX is used to rebuild indexes, keep in mind that as the indexes on a table are being rebuilt, the table becomes unavailable to your users. For example, when a non-clustered index is rebuilt, a shared table lock is put on the table, preventing all but SELECT operations from being performed on it. When a clustered index is rebuilt, an exclusive table lock is put on the table, preventing any table access by your users. Because of this, you should only run this command when users don’t need access to the tables being reorganized. [7.0, 2000] Updated 10-16-2005

DBCC SHOWCONTIG: Used to show how fragmented the data and indexes are in a specified table. If data pages storing data or index information become fragmented, it takes more disk I/O to find and move the data to the SQL Server cache buffer, hurting performance. This command tells you how fragmented these data pages are. If you find that fragmentation is a problem, you can reindex the tables to eliminate it. Note: this fragmentation is fragmentation of data pages within the SQL Server MDB file, not of the physical file itself. Since this command requires you to know the ID of both the table and the index being analyzed, you may want to run the following script so you don’t have to manually look up the table ID number and the index ID number. Example:
DBCC SHOWCONTIG (Table_id, IndexID)
Or:
--Script to identify table fragmentation

--Declare variables
DECLARE
@ID int,
@IndexID int,
@IndexName varchar(128)

--Set the table and index to be examined
SELECT @IndexName = 'index_name' --enter name of index
SET @ID = OBJECT_ID('table_name') --enter name of table

--Get the Index Values
SELECT @IndexID = IndID
FROM sysindexes
WHERE id = @ID AND name = @IndexName

--Display the fragmentation
DBCC SHOWCONTIG (@ID, @IndexID)
While the DBCC SHOWCONTIG command provides several measurements, the key one is Scan Density. This figure should be as close to 100% as possible. If the scan density is less than 75%, then you may want to reindex the tables in your database. [6.5, 7.0, 2000] Updated 3-20-2006

DBCC SHOW_STATISTICS: Used to find out the selectivity of an index. Generally speaking, the higher the selectivity of an index, the greater the likelihood it will be used by the query optimizer. You have to specify both the table name and the index name you want to find the statistics on. Example:
DBCC SHOW_STATISTICS (table_name, index_name)
[7.0, 2000] Updated 3-20-2006

DBCC SQLMGRSTATS: Used to produce three different values that can sometimes be useful when you want to find out how well caching is being performed on ad-hoc and prepared Transact-SQL statements. Example:
DBCC SQLMGRSTATS
Sample Results:
Item Status
————————- ———–
Memory Used (8k Pages) 5446
Number CSql Objects 29098
Number False Hits 425490
Here’s what the above means:
⦁ Memory Used (8k Pages): If the number of memory pages is very large, this may be an indication that some user connection is preparing many Transact-SQL statements but is not un-preparing them.
⦁ Number CSql Objects: Measures the total number of cached Transact-SQL statements.
⦁ Number False Hits: Sometimes, false hits occur when SQL Server goes to match pre-existing cached Transact-SQL statements. Ideally, this figure should be as low as possible.
[2000] Added 4-17-2003

DBCC SQLPERF(): This command includes both documented and undocumented options. Let’s take a look at all of them and see what they do.
DBCC SQLPERF (LOGSPACE)
This option (documented) returns data about the transaction log for all of the databases on the SQL Server, including Database Name, Log Size (MB), Log Space Used (%), and Status.
DBCC SQLPERF (UMSSTATS)
This option (undocumented) returns data about SQL Server thread management.
DBCC SQLPERF (WAITSTATS)
This option (undocumented) returns data about wait types for SQL Server resources.
DBCC SQLPERF (IOSTATS)
This option (undocumented) returns data about outstanding SQL Server reads and writes.
DBCC SQLPERF (RASTATS)
This option (undocumented) returns data about SQL Server read-ahead activity.
DBCC SQLPERF (THREADS)
This option (undocumented) returns data about I/O, CPU, and memory usage per SQL Server thread. [7.0, 2000] Updated 3-20-2006

DBCC SQLPERF (UMSSTATS): When you run this command, you get output like this. (Note: this example was run on a 4 CPU server. There is 1 Scheduler ID per available CPU.)

Statistic Value
——————————– ————————
Scheduler ID 0.0
num users 18.0
num runnable 0.0
num workers 13.0
idle workers 11.0
work queued 0.0
cntxt switches 2.2994396E+7
cntxt switches(idle) 1.7793976E+7
Scheduler ID 1.0
num users 15.0
num runnable 0.0
num workers 13.0
idle workers 10.0
work queued 0.0
cntxt switches 2.4836728E+7
cntxt switches(idle) 1.6275707E+7
Scheduler ID 2.0
num users 17.0
num runnable 0.0
num workers 12.0
idle workers 11.0
work queued 0.0
cntxt switches 1.1331447E+7
cntxt switches(idle) 1.6273097E+7
Scheduler ID 3.0
num users 16.0
num runnable 0.0
num workers 12.0
idle workers 11.0
work queued 0.0
cntxt switches 1.1110251E+7
cntxt switches(idle) 1.624729E+7
Scheduler Switches 0.0
Total Work 3.1632352E+7

Below is an explanation of some of the key statistics above:
⦁ num users: This is the number of SQL Server threads currently in the scheduler.
⦁ num runnable: This is the number of actual SQL Server threads that are runnable.
⦁ num workers: This is the actual number of workers there are to process threads. This is the size of the thread pool.
⦁ idle workers: The number of workers that are currently idle.
⦁ cntxt switches: The number of context switches between runnable threads.
⦁ cntxt switches (idle): The number of context switches to the idle thread.
[2000] Added 4-17-2003

DBCC TRACEON & DBCC TRACEOFF: Used to turn trace flags on and off. Trace flags are often used to turn specific server behaviors or server characteristics on and off temporarily. On rare occasions, they can be useful for troubleshooting SQL Server performance problems. Example: To use the DBCC TRACEON command to turn on a specified trace flag, use this syntax:
DBCC TRACEON (trace# [,…n])
To use the DBCC TRACEOFF command to turn off a specified trace flag, use this syntax:
DBCC TRACEOFF (trace# [,…n])
You can also use the DBCC TRACESTATUS command to find out which trace flags are currently turned on in your server using this syntax:
DBCC TRACESTATUS (trace# [,…n])
For specific information on the different kinds of trace flags available, search this website or look them up in Books Online. [6.5, 7.0, 2000] Updated 3-20-2006

DBCC UPDATEUSAGE: The official use for this command is to report and correct inaccuracies in the sysindexes table, which may result in incorrect space usage reports. Apparently, it can also fix the problem of unreclaimed data pages in SQL Server. You may want to consider running this command periodically to clean up potential problems. This command can take some time to run, and you will want to run it during off hours because it will negatively affect SQL Server’s performance while running. When you run this command, you must specify the name of the database that you want affected. Example:
DBCC UPDATEUSAGE ('databasename')
[7.0, 2000] Updated 3-20-2006

© 2000 – 2008 vDerivatives Limited. All Rights Reserved.

About autumn

For meteorologists, autumn begins on 1 September. The astronomical autumn begins around 23 September. Officially, autumn starts on 21 September.

Around 23 September the sun stands directly above the equator, so that day and night last equally long everywhere on Earth. That is why autumn usually begins on 22 or 23 September. Officially, autumn runs from 21 September up to and including 20 December. According to the climatological classification, autumn already began on 1 September and the season lasts until 30 November.

Leap year

The other seasons usually begin on the 21st. This is partly because the Earth's orbit is not a circle. The differing dates are a consequence of the leap year. Once every four years a year, the leap year, has one more day than the other years. At the beginning of the twentieth century the first day of autumn even fell on 24 September a few times, most recently in 1931.

Winter time

Winter time (the end of daylight saving time) does not begin until the last weekend of October. That fits the weather better, because at this time of year there can still be fine late-summer weather.

On the first day of spring and of autumn, day and night last equally long everywhere (Source: Meteosat MSG)

Late summer

The weather pays no attention to these dates, and it can still be summery until late in the autumn. Even in October, temperatures between 20 and 25 degrees have regularly been measured in our country in recent years.

Depressions

The differences in temperature between the northern hemisphere and the tropics do, however, keep growing. As a result, deeper depressions can form that bring a lot of wind. A very calm and misty type of weather is, however, also characteristic of the autumn season.

Release artifacts and artifact sources

https://docs.microsoft.com/en-us/azure/devops/pipelines/release/artifacts?view=vsts


Note

Build and release pipelines are called definitions in TFS 2018 and in older versions. Service connections are called service endpoints in TFS 2018 and in older versions.

A release is a collection of artifacts in your DevOps CI/CD processes. An artifact is a deployable component of your application. Azure Pipelines can deploy artifacts that are produced by a wide range of artifact sources, and stored in different types of artifact repositories.

When authoring a release pipeline, you link the appropriate artifact sources to your release pipeline. For example, you might link an Azure Pipelines build pipeline or a Jenkins project to your release pipeline.

When creating a release, you specify the exact version of these artifact sources; for example, the number of a build coming from Azure Pipelines, or the version of a build coming from a Jenkins project.

After a release is created, you cannot change these versions. A release is fundamentally defined by the versioned artifacts that make up the release. As you deploy the release to various stages, you will be deploying and validating the same artifacts in all stages.

A single release pipeline can be linked to multiple artifact sources, of which one is the primary source. In this case, when you create a release, you specify individual versions for each of these sources.

Artifacts in a pipeline and release

Artifacts are central to a number of features in Azure Pipelines. Some of the features that depend on the linking of artifacts to a release pipeline are:

  • Auto-trigger releases. You can configure new releases to be automatically created whenever a new version of an artifact is produced. For more details, see Continuous deployment triggers. Note that the ability to automatically create releases is available for only some artifact sources.
  • Trigger conditions. You can configure a release to be created automatically, or the deployment of a release to a stage to be triggered automatically, when only specific conditions on the artifacts are met. For example, you can configure releases to be automatically created only when a new build is produced from a certain branch.
  • Artifact versions. You can configure a release to automatically use a specific version of the build artifacts, to always use the latest version, or to allow you to specify the version when the release is created.
  • Artifact variables. Every artifact that is part of a release has metadata associated with it, exposed to tasks through variables. This metadata includes the version number of the artifact, the branch of code from which the artifact was produced (in the case of build or source code artifacts), the pipeline that produced the artifact (in the case of build artifacts), and more. This information is accessible in the deployment tasks. For more details, see Artifact variables.
  • Work items and commits. The work items or commits that are part of a release are computed from the versions of artifacts. For example, each build in Azure Pipelines is associated with a set of work items and commits. The work items or commits in a release are computed as the union of all work items and commits of all builds between the current release and the previous release. Note that Azure Pipelines is currently able to compute work items and commits for only certain artifact sources.
  • Artifact download. Whenever a release is deployed to a stage, by default Azure Pipelines automatically downloads all the artifacts in that release to the agent where the deployment job runs. The procedure to download artifacts depends on the type of artifact. For example, Azure Pipelines artifacts are downloaded using an algorithm that downloads multiple files in parallel. Git artifacts are downloaded using Git library functionality. For more details, see Artifact download.

Artifact sources

There are several types of tools you might use in your application lifecycle process to produce or store artifacts. For example, you might use continuous integration systems such as Azure Pipelines, Jenkins, or TeamCity to produce artifacts. You might also use version control systems such as Git or TFVC to store your artifacts. Or you can use repositories such as Package Management in Visual Studio Team Services or a NuGet repository to store your artifacts. You can configure Azure Pipelines to deploy artifacts from all these sources.

By default, a release created from the release pipeline will use the latest version of the artifacts. At the time of linking an artifact source to a release pipeline, you can change this behavior by selecting one of the options to use the latest build from a specific branch by specifying the tags, a specific version, or allow the user to specify the version when the release is created from the pipeline.

Adding an artifact

If you link more than one set of artifacts, you can specify which is the primary (default).

Selecting a default version option

The following sections describe how to work with the different types of artifact sources.


Azure Pipelines

You can link a release pipeline to any of the build pipelines in Azure Pipelines or TFS project collection.

Note

You must include a Publish Artifacts task in your build pipeline. For XAML build pipelines, an artifact with the name drop is published implicitly.

Some of the differences in capabilities between different versions of TFS and Azure Pipelines are:

  • TFS 2015: You can link build pipelines only from the same project of your collection. You can link multiple definitions, but you cannot specify default versions. You can set up a continuous deployment trigger on only one of the definitions. When multiple build pipelines are linked, the latest builds of all the other definitions are used, along with the build that triggered the release creation.
  • TFS 2017 and newer and Azure Pipelines: You can link build pipelines from any of the projects in Azure Pipelines or TFS. You can link multiple build pipelines and specify default values for each of them. You can set up continuous deployment triggers on multiple build sources. When any of the builds completes, it will trigger the creation of a release.

The following features are available when using Azure Pipelines sources:

  • Auto-trigger releases: New releases can be created automatically when new builds (including XAML builds) are produced. See Continuous Deployment for details. You do not need to configure anything within the build pipeline. See the notes above for differences between versions of TFS.
  • Artifact variables: A number of artifact variables are supported for builds from Azure Pipelines.
  • Work items and commits: Azure Pipelines integrates with work items in TFS and Azure Pipelines. These work items are also shown in the details of releases. Azure Pipelines integrates with a number of version control systems such as TFVC, Git, GitHub, Subversion, and external Git repositories. Azure Pipelines shows the commits only when the build is produced from source code in TFVC or Git.
  • Artifact download: By default, build artifacts are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.
  • Deployment section in build: The build summary includes a Deployment section, which lists all the stages to which the build was deployed.

TFVC, Git, and GitHub

There are scenarios in which you may want to consume artifacts stored in a version control system directly, without passing them through a build pipeline. For example:

  • You are developing a PHP or a JavaScript application that does not require an explicit build pipeline.
  • You manage configurations for various stages in different version control repositories, and you want to consume these configuration files directly from version control as part of the deployment pipeline.
  • You manage your infrastructure and configuration as code (such as Azure Resource Manager templates) and you want to manage these files in a version control repository.

Because you can configure multiple artifact sources in a single release pipeline, you can link both a build pipeline that produces the binaries of the application as well as a version control repository that stores the configuration files into the same pipeline, and use the two sets of artifacts together while deploying.

Azure Pipelines integrates with Team Foundation Version Control (TFVC) repositories, Git repositories, and GitHub repositories.

You can link a release pipeline to any of the Git or TFVC repositories in any of the projects in your collection (you will need read access to these repositories). No additional setup is required when deploying version control artifacts within the same collection.

When you link a Git or GitHub repository and select a branch, you can edit the default properties of the artifact types after the artifact has been saved. This is particularly useful in scenarios where the branch for the stable version of the artifact changes, and continuous delivery releases should use this branch to obtain newer versions of the artifact. You can also specify details of the checkout, such as whether to check out submodules and LFS-tracked files, and the shallow fetch depth.

When you link a TFVC branch, you can specify the changeset to be deployed when creating a release.

The following features are available when using TFVC, Git, and GitHub sources:

  • Auto-trigger releases: You can configure a continuous deployment trigger for pushes into the repository in a release pipeline. This can automatically trigger a release when a new commit is made to a repository. See Triggers.
  • Artifact variables: A number of artifact variables are supported for version control sources.
  • Work items and commits: Azure Pipelines cannot show work items or commits associated with releases when using version control artifacts.
  • Artifact download: By default, version control artifacts are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

Jenkins

To consume Jenkins artifacts, you must create a service connection with credentials to connect to your Jenkins server. For more details, see service connections and Jenkins service connection. You can then link a Jenkins project to a release pipeline. The Jenkins project must be configured with a post build action to publish the artifacts.

The following features are available when using Jenkins sources:

  • Auto-trigger releases: You can configure a continuous deployment trigger for pushes into the repository in a release pipeline. This can automatically trigger a release when a new commit is made to a repository. See Triggers.
  • Artifact variables: A number of artifact variables are supported for builds from Jenkins.
  • Work items and commits: Azure Pipelines cannot show work items or commits for Jenkins builds.
  • Artifact download: By default, Jenkins builds are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

Artifacts generated by Jenkins builds are typically propagated to storage repositories for archiving and sharing. Azure blob storage is one of the supported repositories, allowing you to consume Jenkins projects that publish to Azure storage as artifact sources in a release pipeline. Deployments download the artifacts automatically from Azure to the agents. In this configuration, connectivity between the agent and the Jenkins server is not required. Microsoft-hosted agents can be used without exposing the server to the internet.

Note

Azure Pipelines may not be able to contact your Jenkins server if, for example, it is within your enterprise network. In this case you can integrate Azure Pipelines with Jenkins by setting up an on-premises agent that can access the Jenkins server. You will not be able to see the name of your Jenkins projects when linking to a build, but you can type this into the link dialog field.

For more information about Jenkins integration capabilities, see Azure Pipelines Integration with Jenkins Jobs, Pipelines, and Artifacts.


Azure Container Registry, Docker, Kubernetes

When deploying containerized apps, the container image is first pushed to a container registry. After the push is complete, the container image can be deployed to the Web App for Containers service or a Docker/Kubernetes cluster. You must create a service connection with credentials to connect to your service to deploy images located there, or to Azure. For more details, see service connections.

The following features are available when using Azure Container Registry, Docker, Kubernetes sources:

  • Auto-trigger releases: You can configure a continuous deployment trigger for images. This can automatically trigger a release when a new commit is made to a repository. See Triggers.
  • Artifact variables: A number of artifact variables are supported for builds.
  • Work items and commits: Azure Pipelines cannot show work items or commits.
  • Artifact download: By default, builds are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

NuGet and npm packages from Package Management

To integrate with NuGet, or npm (Maven is not currently supported), you must first assign licenses for the Package Management extension from the Marketplace. For more information, see the Package Management Overview.

Scenarios where you may want to consume Package Management artifacts are:

  1. You have your application build (such as TFS, Azure Pipelines, TeamCity, Jenkins) published as a package (NuGet or npm) to Package Management and you want to consume the artifact in a release.
  2. As part of your application deployment, you need additional packages stored in Package Management.

When you link a Package Management artifact to your release pipeline, you must select the Feed, Package, and the Default version for the package. You can choose to pick up the latest version of the package, use a specific version, or select the version at the time of release creation. During deployment, the package is downloaded to the agent folder and the contents are extracted as part of the job execution.

The following features are available when using Package Management sources:

  • Auto-trigger releases: You can configure a continuous deployment trigger for packages. This can automatically trigger a release when a package is updated. See Triggers.
  • Artifact variables: A number of artifact variables are supported for packages.
  • Work items and commits: Azure Pipelines cannot show work items or commits.
  • Artifact download: By default, packages are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

External or on-premises TFS

You can use Azure Pipelines to deploy artifacts published by an on-premises TFS server. You don’t need to make the TFS server visible on the Internet; you just set up an on-premises automation agent. Builds from an on-premises TFS server are downloaded directly into the on-premises agent, and then deployed to the specified target servers. They will not leave your enterprise network. This allows you to leverage all of your investments in your on-premises TFS server, and take advantage of the release capabilities in Azure Pipelines.

Using this mechanism, you can also deploy artifacts published in one Azure Pipelines subscription from another, or deploy artifacts published in one Team Foundation Server from another Team Foundation Server.

To enable these scenarios, you must install the TFS artifacts for Azure Pipelines extension from Visual Studio Marketplace. Then create a service connection with credentials to connect to your TFS server (see service connections for details).

You can then link a TFS build pipeline to your release pipeline. Choose External TFS Build in the Type list.

The following features are available when using external TFS sources:

  • Auto-trigger releases: You cannot configure a continuous deployment trigger for external TFS sources in a release pipeline. To automatically create a new release when a build is complete, you would need to add a script to your build pipeline in the external TFS server to invoke Azure Pipelines REST APIs and to create a new release.
  • Artifact variables: A number of artifact variables are supported for external TFS sources.
  • Work items and commits: Azure Pipelines cannot show work items or commits for external TFS sources.
  • Artifact download: By default, External TFS artifacts are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

Note

Azure Pipelines may not be able to contact an on-premises TFS server if, for example, it is within your enterprise network. In this case you can integrate Azure Pipelines with TFS by setting up an on-premises agent that can access the TFS server. You will not be able to see the name of your TFS projects or build pipelines when linking to a build, but you can type these into the link dialog fields. In addition, when you create a release, Azure Pipelines may not be able to query the TFS server for the build numbers. Instead, type the Build ID (not the build number) of the desired build into the appropriate field, or select the Latest build.


TeamCity

To integrate with TeamCity, you must first install the TeamCity artifacts for Azure Pipelines extension from Marketplace.

To consume TeamCity artifacts, start by creating a service connection with credentials to connect to your TeamCity server (see service connections for details).

You can then link a TeamCity build configuration to a release pipeline. The TeamCity build configuration must be configured with an action to publish the artifacts.

The following features are available when using TeamCity sources:

  • Auto-trigger releases: You cannot configure a continuous deployment trigger for TeamCity sources in a release pipeline. To create a new release automatically when a build is complete, add a script to your TeamCity project that invokes the Azure Pipelines REST APIs to create a new release.
  • Artifact variables: A number of artifact variables are supported for builds from TeamCity.
  • Work items and commits: Azure Pipelines cannot show work items or commits for TeamCity builds.
  • Artifact download: By default, TeamCity builds are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

Note

Azure Pipelines may not be able to contact your TeamCity server if, for example, it is within your enterprise network. In this case you can integrate Azure Pipelines with TeamCity by setting up an on-premises agent that can access the TeamCity server. You will not be able to see the name of your TeamCity projects when linking to a build, but you can type this into the link dialog field.


Other sources

Your artifacts may be created and exposed by other types of sources such as a NuGet repository. While we continue to expand the types of artifact sources supported in Azure Pipelines, you can start using it without waiting for support for a specific source type. Simply skip the linking of artifact sources in a release pipeline, and add custom tasks to your stages that download the artifacts directly from your source.


Artifact download

When you deploy a release to a stage, the versioned artifacts from each of the sources are, by default, downloaded to the automation agent so that tasks running within that stage can deploy these artifacts. The artifacts downloaded to the agent are not deleted when a release is completed. However, when you initiate the next release, the downloaded artifacts are deleted and replaced with the new set of artifacts.

A new unique folder in the agent is created for every release pipeline when you initiate a release, and the artifacts are downloaded into that folder. The $(System.DefaultWorkingDirectory) variable maps to this folder.

Note that, at present, Azure Pipelines does not perform any optimization to avoid downloading the unchanged artifacts if the same release is deployed again. In addition, because the previously downloaded contents are always deleted when you initiate a new release, Azure Pipelines cannot perform incremental downloads to the agent.

You can, however, instruct Azure Pipelines to skip the automatic download of artifacts to the agent for a specific job and stage of the deployment if you wish. Typically, you will do this when the tasks in that job do not require any artifacts, or if you implement custom code in a task to download the artifacts you require.

In Azure Pipelines, you can, however, select which artifacts you want to download to the agent for a specific job and stage of the deployment. Typically, you will do this to improve the efficiency of the deployment pipeline when the tasks in that job do not require all or any of the artifacts, or if you implement custom code in a task to download the artifacts you require.

Selecting the artifacts to download

Artifact source alias

To ensure the uniqueness of every artifact download, each artifact source linked to a release pipeline is automatically provided with a specific download location known as the source alias. This location can be accessed through the variable:

$(System.DefaultWorkingDirectory)\[source alias]

This uniqueness also ensures that, if you later rename a linked artifact source in its original location (for example, rename a build pipeline in Azure Pipelines or a project in Jenkins), you don’t need to edit the task properties because the download location defined in the agent does not change.

The source alias is, by default, the name of the source selected when you linked the artifact source, prefixed with an underscore; depending on the type of the artifact source this will be the name of the build pipeline, job, project, or repository. You can edit the source alias from the artifacts tab of a release pipeline; for example, when you change the name of the build pipeline and you want to use a source alias that reflects the name of the build pipeline.

The source alias can contain only alphanumeric characters and underscores, and must start with a letter or an underscore.
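As a small, hypothetical illustration of how a custom script task might use this location (not taken from the documentation above): Azure Pipelines normally exposes pipeline variables such as System.DefaultWorkingDirectory to scripts as environment variables, with dots replaced by underscores and the name upper-cased, so a Python step could locate the files downloaded for a given source alias roughly like this (the alias _MyBuildPipeline is made up for the example):

import os
from pathlib import Path

# Hypothetical source alias; the real one is shown on the Artifacts tab of the release pipeline.
SOURCE_ALIAS = "_MyBuildPipeline"

# System.DefaultWorkingDirectory is normally exposed to scripts as SYSTEM_DEFAULTWORKINGDIRECTORY.
root = os.environ.get("SYSTEM_DEFAULTWORKINGDIRECTORY", ".")
artifact_dir = Path(root) / SOURCE_ALIAS

# List whatever was downloaded for this artifact source.
for item in sorted(artifact_dir.rglob("*")):
    print(item)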

Primary source

When you link multiple artifact sources to a release pipeline, one of them is designated as the primary artifact source. The primary artifact source is used to set a number of pre-defined variables. It can also be used in naming releases.

Artifact variables

Azure Pipelines exposes a set of pre-defined variables that you can access and use in tasks and scripts; for example, when executing PowerShell scripts in deployment jobs. When there are multiple artifact sources linked to a release pipeline, you can access information about each of these. For a list of all pre-defined artifact variables, see variables.


Pension funds to cut pensions

Quite remarkable, really

Cuts on the way at large pension funds

Last update: 19 January 2012 11:24

RIJSWIJK – Three of the five largest pension funds in our country must take additional measures to eliminate their funding shortfalls.

Photo: ANP

If no substantial recovery of their financial situation materializes this year, the metal-sector pension funds PME and PMT and the civil-service pension fund ABP will have to cut pensions next year.

ABP, the largest Dutch pension fund, sees a reduction of the pensions by half a percent as "a realistic option".

At PMT the cut is expected to amount to 6 to 7 percent. PME has not yet given an indication of the expected cut. All three funds will take decisions on this in February; whether the cuts are actually carried out depends on the funding ratios at the end of this year.

ABP

ABP's funding ratio improved to 94 percent in the last quarter of 2011, but according to the rules of De Nederlandsche Bank (DNB) that recovery is too meagre.

"As the new chairman of ABP I would have much preferred to bring our members better news, but unfortunately, at this funding ratio we are forced to take additional measures," said chairman Henk Brouwer.

Zorg en Welzijn

Pension fund Zorg en Welzijn (PFZW) does not need to intervene in pension payments for the time being. "We do not have to take painful extra measures at the moment," said director Peter Borgdorff.

The industry-wide pension fund for the construction sector (BPF Bouw) will probably not have to lower pensions either, provided the situation on the financial markets improves this year.

—-

AMSTERDAM – Although coverage ratios are falling because of low interest rates, the total assets of all Dutch pension funds grew to a record 875 billion euros over the past year.

This follows from calculations made by television channel RTL Z on the basis of published "and not yet published" data. At the start of 2011 the total still stood at 801 billion euros, after which assets grew by 9 percent thanks to gains on bond portfolios.

The healthcare pension fund PFZW saw its assets increase the most, from 99.5 billion euros to 110.7 billion euros (a return of 11.3 percent).

The civil-service fund ABP gained 3.8 percent and now has 246 billion euros on hand. Like many other funds, ABP announced cuts and rising pension contributions this week, because its coverage ratio is falling under the influence of low interest rates.

Additional measures

ABP's coverage ratio fell by 11 percentage points over the whole of 2011 and now stands at 94 percent. Three of the five largest pension funds have to take additional measures to eliminate their coverage shortfalls if financial recovery fails to materialise.

If interest rates remain unchanged, the average coverage ratio of Dutch pension funds will fall further to 93 percent in the coming months, pension adviser Mercer reported on Wednesday.

Dutch pension assets total 875 billion euros

20 Jan 2012 (last updated: 21 Jan 2012)

Dutch pension funds have never been as rich as they are now; the money is sloshing over the rails.

At the end of 2011 total assets rose to 875 billion euros, a record!

More than our national income

The combined pension funds started 2011 with total available assets of 801 billion euros. Thanks to huge gains on their bond portfolios, the funds became 74 billion euros richer last year, an increase of over 9%. The last quarter alone yielded 40 billion euros.

This follows from preliminary calculations by RTL Z based on published and not yet published data.

Pension assets have doubled over recent years, making the Netherlands a rich country. Pension assets amount to well over 140% of our GNP of 600 billion euros a year, which is quite unique!

Notable and noteworthy funds:

Young and rich

The healthcare pension fund PFZW made the biggest gain of the year. The "green" fund, with many young members, became no less than 11.2 billion euros richer. Its assets rose from 99.5 to 110.7 billion, an increase of 11.3%. PFZW made a 31% return on its bond portfolio and its interest-rate and inflation hedges. The fall in interest rates did push the coverage ratio down to 97%, but the treasurer's coffers have never been so well filled.

Old and grey

The much "greyer" ABP saw its assets increase by "only" 9 billion euros (3.8%), from 237 to 246 billion euros. That is still a record high. ABP profited less from falling interest rates than the other funds, as it had bought fewer interest-rate derivatives.

Ailing

It is also striking that the ailing metal-sector fund PME saw its holdings rise by 14%. Its assets grew from 22.6 to 25.8 billion euros. PME, however, has a coverage ratio of only 90%, so in April 2013 it will have to cut pensions by 6 to 7%.

Healthy

The fairly healthy pension fund for railway employees did not gain much in assets this year. Its total holdings rose only from 10.86 billion to 11.18 billion, an increase of almost 3%. Its members can sleep easy, though: the coverage ratio is 113%. The fund therefore does not need to cut benefits, and next year it could even index pensions for inflation. If you are looking for a job, you might consider becoming a train conductor…

45% pension contribution

Shell publishes its annual figures next week. Its own pension fund is therefore currently in the "silent period" and will not disclose its assets. At the end of 2010 the fund had assets of 17.5 billion euros and a coverage ratio of 123%.

In 2011 the coverage ratio fell to 111%, with insufficient assets in the long term. The board has therefore decided to raise the pension contribution to 45% of the pensionable base (salary minus the so-called franchise, the exempt amount). The employer pays a 41.6% contribution. Employees with a salary of up to 77,000 euros pay only 2% themselves, and 8% above that. So for your career you might do best to think of Shell. Petrol-pump attendant, perhaps?

National wealth increased thanks to larger pension pot

24 February 2016, 05:30

The wealth of the Netherlands has grown since the start of the crisis. This is mainly because the assets of pension funds and insurers have grown by more than 615 billion euros since 2008, to 1.7 trillion euros in the third quarter of 2015. Statistics Netherlands (CBS) reports this on the basis of an overview published today for the first time.

Read more here: www.cbs.nl

or here:

Net national wealth is the wealth of Dutch households, businesses and the Dutch government combined, that is, Dutch assets minus debts. At the end of 2014 that wealth amounted to almost 3.7 trillion euros, 292 billion euros more than at the end of 2008.

The net wealth of the Netherlands is a combination of financial assets (such as shares, bonds and deposits) and tangible assets, minus debts. The value of those tangible, or non-financial, assets (such as real estate, roads and machinery) has fallen since 2008, partly because property lost value after the housing market collapsed. The rise in the total is therefore entirely due to the increase in the other component: financial assets.

Financial assets

The financial assets of the Netherlands relative to other countries are measured by offsetting Dutch claims on and liabilities to other countries against each other. The resulting amount is known as the Dutch external wealth.

At the end of 2008 that external wealth was 48 billion euros in the red: the Netherlands owed other countries more than the other way around. By the end of the third quarter of 2015, however, the external wealth had risen to a surplus of no less than 568 billion euros. Only positions relative to other countries are counted here: domestic claims and debts cancel each other out in the calculation. A loan taken out by a Dutch company with a Dutch bank, for instance, is a debt for one party (the company) and an equally large claim for the other party (the bank).

Thanks to the new overview it is now clear, per sector, which claims and debts they hold against each other and against other countries, which gives better insight into where the increase between 2008 and 2015 came from. The increase in net external wealth lies mainly with the pension funds, and can therefore be traced back indirectly to households.

More assets needed because of low interest rates

Since 2008, Dutch households have seen their claims on insurers and pension funds grow strongly. At the end of September 2015 these amounted to almost 1.5 trillion euros, compared with 858 billion euros at the end of 2008. The bulk consists of pension entitlements. Those entitlements represent the total assets that insurers and pension funds must hold in order to be able to meet their pension obligations.

The rise in entitlements is mainly due to the fall in the discount rate. That rate is a measure of the future return of pension funds and hence of the assets the funds must hold to remain sustainable. The lower the discount rate, the higher the required assets; and the higher the assets, the higher the present value of households' pension entitlements. Funds have also had to hold more assets in recent years because of adjustments to life expectancy, but the influence of that is much smaller than that of the fall in interest rates.

Since the crisis, pension funds have therefore had to hold much larger assets in order to keep meeting their future obligations. The assets of pension funds and insurers, which also manage part of the pension wealth, thus rose by more than 615 billion euros between the end of 2008 and September 2015, to 1.7 trillion euros.

Growth in assets

Several factors explain why the funds' assets have grown since 2008. First, capital-market interest rates fell almost continuously after 2008. The bonds held by pension funds became more valuable, because the coupon rate on old bonds was higher than the market rate. In addition, share prices rose and foreign investments gained value because most foreign currencies, such as the US dollar, the Swiss franc and the British pound, appreciated against the euro. Finally, funds raised their contributions, while pension payments did not keep pace with inflation or were even cut.

Insurers and pension funds by no means invest everything themselves; in recent years they placed part of their capital with investment funds, which increased their claims on those institutions. The claims of investment funds on other countries in turn rose sharply because they invested a large part of the money contributed by the pension funds outside the Netherlands. Pension funds' direct investments abroad have also increased.

Because pension funds (and investment funds) have hardly any financial liabilities (debts) abroad, the growth of pension assets has had a major effect on the increase in external wealth.

European Union

A negative external wealth indicates that a country is financially vulnerable: it then has large debts abroad, which often entails prolonged interest payments and repayments. The European Commission has therefore included this measure in a scoreboard intended to detect macroeconomic imbalances in member states. The report on this will be published shortly.

Measured against the size of its economy, the Netherlands has the highest net external wealth in Europe. The size and design of the Dutch pension system, which is fairly unique in Europe, is a major cause of this. A high net external wealth should not, however, be confused with the wealth of a country: it only reflects the financial position relative to other countries, and domestically accumulated wealth is not counted.

How to use an SQL query with parameters in Excel

1 Intro

2 Set up the query to run as a stored procedure

3 Prepare Excel

4 Make ODBC connection to database in SQL

5 Prepare “Microsoft query”

6 Link fields to query parameters

Simple example query:

declare @StartDate datetime set @StartDate = '2018-01-01'
declare @EndDate datetime set @EndDate = '2018-01-31'

select * from tblOrder where orderdate >= @StartDate and orderdate <= @EndDate

Create and run a script that creates a stored procedure:

CREATE PROCEDURE spSelectOrder
    -- Add the parameters for the stored procedure here
    @StartDate DateTime,
    @EndDate DateTime
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- Select statement for the procedure
    select * from tblOrder
    where orderdate >= @StartDate and orderdate <= @EndDate
END
GO
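
To verify the procedure before wiring it into Excel, you could run it directly in SQL Server Management Studio; a minimal test call with literal dates (using the example table above) looks like this:

-- Quick test of the stored procedure with literal parameter values
EXEC spSelectOrder @StartDate = '2018-01-01', @EndDate = '2018-01-31'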

This stored procedure can be run in Excel using Microsoft Query. To run it, prepare a worksheet with the parameter values filled in; these values will be used as input for the query later.

The next step is to add the data source to the worksheet. Go to Data, "From Other Sources", "From Microsoft Query". This starts a wizard to create a data connection:

1 Select or create a Data Source

2 The next step in the wizard is Choose Columns. Cancel the wizard on this screen and a question will pop up asking you if you want to continue editing this query. Click Yes.

3 MS Query will be started and a dialog to select tables will be presented. Close this dialog.

4 Click the SQL button in the button bar, or choose View, SQL from the menu.

5 Type "call" followed by the stored procedure name, with question marks for the parameter inputs between parentheses. Place this between curly brackets; the curly brackets are required to avoid syntax-check errors.

{call spSelectOrder (?, ?)}

6 Press OK and you will be prompted to input values for the required parameters.

The results will be retrieved in a query result window. After the result is presented, go to File and click "Return Data to Microsoft Excel".

Microsoft Query will be closed and you will return to Excel with a dialog to import the data. Choose cell A4 in the worksheet.

Again you will be prompted to input values for the parameters. This time you are able to select cells B1 and B2 as input. Check the box in the dialog to use this reference for future refreshes. If you want, you can also choose to refresh the data when the cell value changes.

If you want to refresh the data manually, you can right-click anywhere in the data grid and select "Refresh".

How to develop with various app.config files

When I want to test my application in Microsoft Visual Studio, I want to use an alternate app.config rather than the production one. In my development environment I have different connection strings, and so on.

To switch between development and production configuration I have come up with the following configuration:

I want to switch configurations in the IDE and have the build use the appropriate app.config file.

For each environment I have added a separate app.config.<ConfigurationName> file to my project (for example app.config.Debug and app.config.Release).

Next I have added a pre-build.bat and a post-build.bat in the solution.

The pre-build.bat file contains one command to replace app.config for the selected configuration:

copy /y %1\app.config.%2 %1\app.config

The post-build.bat file contains one command to replace the app.config with the release or “production” configuration.
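
As a minimal sketch, assuming the production copy is named app.config.Release (adjust the file name to whatever your release configuration uses), post-build.bat could contain:

rem Restore the production configuration after the build (the file name is an assumption)
copy /y %1\app.config.Release %1\app.config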

The last step is to tie these components together. Add a Pre-build and a Post-build event on the project properties page. You can find these settings by right-clicking the project and clicking Properties.

The command I added to start pre-build.bat is:

"$(ProjectDir)pre-build.bat" "$(ProjectDir)" "$(ConfigurationName)"

and for the post-build.bat the command is:

"$(ProjectDir)post-build.bat" "$(ProjectDir)"

The fleet

We build and fly with the whole family.

My father started a while ago (in the early 1980s) with balsa aircraft at a building and flying club (BVL) somewhere between Oud-Gastel and Moerstraten. The club soon moved to Kruisland.

My brother and I regularly came along as well, but at the time we were still too young to be allowed, or able, to fly ourselves.

Later the three of us took up flying again and joined a flying club in Rilland Bath (Alouette). That airfield, however, was quite a distance from home, which proved not very practical in the long run.

Since 2015 we have switched to electric-powered model aircraft. My nephew has also caught the bug and now flies with us.