Visual Studio 11 Beta and .NET 4.5 Beta Available Now!

I am very happy to announce that the Visual Studio 11 Beta and .NET 4.5 Beta releases are now available for download.

I previously blogged about some of the innovation that went into the Visual Studio 11 Developer Preview releases at //BUILD/.

Since //BUILD/, we’ve been hard at work on the beta releases, and I’m excited at the improvements that have been made on the feature sets previously shown, as well as on the slew of new functionality and value that’s been added.

For a more in-depth view on these beta releases, see Jason Zander’s blog.  And for those of you yearning to learn more about how to build great apps for Windows 8, check out the new Windows 8 app developer blog.

Since the Visual Studio 11 “sneak peek” event held on Thursday, we’ve been listening intently to all of the feedback you’ve provided.  We very much appreciate the comments you’ve shared with us, and we look forward to hearing more from you once you’ve begun using the beta bits.


The Microsoft C++ Compiler Turns 20!

This month, we enter the third decade of C++ at Microsoft.

It was twenty years ago, in February of 1992, that we released our first C++ compiler: Microsoft C/C++ 7.0. Before then, we had already shipped several of the C++ “preprocessor” compilers that translated C++ into C, which our C compiler then built into the executable program. But starting in 1992, Microsoft’s premier native compiler supported C++ directly, and it has done so ever since.

C/C++ 7.0 shipped in a box that was over two feet long and produced MS-DOS, Windows, and OS/2 applications. It also sported the last of the character-oriented development environments for C that we ever shipped – the following product was Visual C++, which built on what we had learned from delivering QuickC. Since those early days, we have shipped eleven major releases of C/C++ products (ignoring small point upgrades) for both Windows and embedded development.

This month, on the 20th anniversary of our first C++ compiler, we’re looking forward to shipping the beta of Visual C++ 11. It includes support for ARM processors, Windows 8 tablet apps, C++ AMP for heterogeneous parallel computing, automatic parallelization, and the complete ISO C++11 standard library… and a few more of the new C++11 language features too.

Last summer, we pledged to publish the C++ AMP specification as an open specification that any compiler vendor may implement, to target any operating system platform. Today, we published the C++ AMP open specification to support using C++ for heterogeneous parallel computing on GPUs and multicore/SSE today, with more to come in the future. Read the full announcement and download the specification at the Native Concurrency blog.

Finally, to make this anniversary celebration complete, we’re shifting gears to pick up speed: After Visual C++ 11 ships, you’ll see us deliver compiler and library features more frequently in shorter out-of-band release cycles than our historical 2- or 3-year timeframe. And, of course, the first and most important target of those more agile releases is to deliver more and more of the incredible value in the new ISO Standard C++11 language. Please check Herb Sutter’s keynote at GoingNative 2012 for further details.

After 20 years, C++ is alive and well, and going stronger and faster than ever, not just at Microsoft but across our industry. Use it. Love it. And go native!


Inside the C++/CX Design

Hello, this is Jim Springfield, an architect on the Visual C++ team.

Today, I want to give some insight into the new language extensions, officially called C++/CX, which were designed to support the new API model in Windows 8. If you attended //BUILD/, watched some of the sessions online, or have been playing with the prerelease of Visual Studio, you have probably seen some of the “new” syntax. For anyone who is familiar with C++/CLI (i.e. the language extensions we provide for targeting the CLR), the syntax shouldn’t seem much different.

Please note, however, that while the C++/CX syntax is very similar to C++/CLI, the underlying implementation is very different: it does not use the CLR or a garbage collector, and it generates completely native code (x86, x64, or ARM, depending on the target).

Early on in the design of our support for Windows 8, we looked at many different ideas, including a pure library approach as well as various ways to integrate support into the language. We have a long history of supporting COM on the Visual C++ team, from MFC to ATL to #import to attributed ATL. We also have a good bit of experience in targeting the CLR, including the original managed extensions, C++/CLI, and the IJW support for compiling native code to MSIL. Our design team consisted of seven people, including people who worked on these technologies and who have lots of experience in libraries, compiler implementation, and language design.

We actually did develop a new C++ template library for Windows 8, called WRL (Windows Runtime Library), that supports targeting Windows 8 without language extensions. WRL is quite good, and it can be illuminating to take a look at it and see how all of the low-level details are implemented. It is used internally by many Windows teams, although it does suffer from many of the same problems that ATL does in its support of classic COM.

  1. Authoring of components is still very difficult. You have to know a lot of the low-level rules about interfaces.
  2. You need a separate tool (MIDL) to author interfaces/types.
  3. There is no way to automatically map interfaces from low-level to a higher level (modern) form that throws exceptions and has real return values.
  4. There is no unification of authoring and consumption patterns.

With some of the new concepts in the Windows Runtime, these drawbacks become even more pronounced than in classic COM/ATL. Interface inheritance isn’t vtable-based as it is in classic COM. Class inheritance is based on a mechanism similar to aggregation, but with some differences, including support for private and protected interfaces. We quickly realized that although there is a need for a low-level tool like WRL, for the vast majority of uses it is just too hard to use, and we could do a lot better while still preserving performance and providing a lot of control.

The #import feature that was available in VC6 provides a good mechanism for consuming COM objects that have a type library. We thought about providing something similar for the Windows Runtime (which uses a new .winmd file), but while that could provide a good consumption experience, it does nothing for authoring. Given that Windows is moving to a model where many things are asynchronous, authoring of callbacks is very important and there aren’t many consumption scenarios that wouldn’t include at least some authoring. Also, authoring is very important for writing UI applications as each page and user-defined control is a class derived from an existing Runtime class.

The design team spent a lot of time discussing what consumption of Windows Runtime components should look like. We decided early on that we should expose classes and interfaces at a higher level than what the ABI defines. Supporting modern C++ features such as exceptions was deemed important, as was mapping the Runtime definition of inheritance (both for interfaces and classes) to C++ in a way that felt natural. It quickly became clear that we would need some new type categories to represent these, as we couldn’t change what the existing C++ ABI meant. We went through a lot of different names, and it wasn’t until we decided to use the ^ that we also decided to use ref class to indicate the authoring of a Windows Runtime class.

We also spent a lot of time exploring various approaches to holding a pointer to a WinRT class or interface. Part of this decision was also how to tell the difference between the low-level version of an interface and the high-level version. We had a lot of different proposals, including just using a *, using * with a modifier, and using various other characters such as the ‘@’ symbol. In the original extensions we did for managed code, we in fact did use a * with a modifier (__gc). We realized we would have many of the same problems if we followed that route. Some of the breakthroughs came when we started thinking about what the type of a pointer dereference would be. This made us realize that what we were doing was similar to what we did when C++/CLI was designed. At one point, someone suggested, “Why don’t we just use the ^ symbol?” After the laughter died down, it started making a lot of sense. On design point after design point, we often came to the same design decision we had made for C++/CLI.

Many of the concepts we were trying to express were already present in the C++/CLI syntax. Given that reference counting is a form of garbage collection, using ^ to represent a “refcounted” pointer in C++/CX fits quite well. Dereferencing a ^ yields a %, also as in C++/CLI. While many concepts are expressed the same way, there are a few areas where we decided to deviate from C++/CLI. For example, in C++/CX, the default interface on a class is specified through an attribute in the interface list, while in C++/CLI it is an attribute on the class itself.

In C++/CX we have a much better story than C++/CLI when it comes to interoperating reference types with regular types. In C++/CLI, managed objects can move around in memory as the garbage collector runs. This means you can’t take the real address of a member (without pinning) or even embed anything except primitive types (e.g. int) in your class. You also cannot put a ^ into a native class or struct. In C++/CX, objects do not move around in memory, and thus all of these restrictions are gone. You can put any type into a ref class, and you can put a ^ anywhere. This model is much friendlier to normal C++ types and gives the programmer more flexibility in C++/CX.

We will be providing more insight into our design over the coming months. If there are specific things you would like to know more about, please let us know.


Imagine Cup People’s Choice

The Imagine Cup brings together students from all walks of life from around the world and tasks them with a big challenge: solve the world’s toughest problems through technology.  

I have been involved in the competition for all of its nine years, and each year, I am astounded by the caliber of the projects.  The creators aren’t just innovators, but busy students.  And yet, they manage to develop projects that could make a difference in the lives of many.

As I look at the project list, I can’t help but notice a couple of overwhelming trends that reflect students’ role at the cutting edge of technology.  First, 75% of all software design projects utilize Windows Phone 7 in new ways, such as diagnosing disease and enhancing healthcare.  Half of the software design projects use Windows Azure as a way to scale and share data in their solutions.  And we have ten projects utilizing Kinect.

The Imagine Cup 2011 Worldwide Finals will be held July 8-13 in New York City.  This year, more than 400 international students, made up of 124 student teams from 73 countries, will gather to compete for top honors.  Beginning today, everyone can get involved in the excitement by voting for the Imagine Cup 2011 Worldwide People’s Choice.

The People’s Choice competition allows the public to review many of the projects that will be a part of the worldwide finals and select their favorite.  You can vote once per day.  Pick a project that addresses an issue that is important to you, or show your pride by voting for your national team.  I invite you to watch the videos, and prepare to be impressed.

To learn more about Imagine Cup and vote, please visit the People’s Choice site.  Congratulations to all of this year’s finalists!


Roslyn CTP Now Available

In my last few blog posts, I’ve highlighted significant advancements our teams have made as part of the Visual Studio 11 Developer Preview released at //BUILD/, and I’ll continue that series in future posts.  Today, however, I want to highlight some innovative work our teams have been doing that is even more forward looking.

I’m excited to announce that we’ve just released the Microsoft “Roslyn” CTP, which enables the C# and Visual Basic compilers to be used as a service.  While we’ve been busy working on C# 5 and Visual Basic 11, via Roslyn we’ve been working concurrently on a complete rewrite of the C# and Visual Basic compilers.  Whereas today’s compilers are implemented in native C++, in Roslyn we’ve rewritten the compilers from the ground up, implementing the C# compiler in C# and the Visual Basic compiler in Visual Basic.  That in and of itself isn’t entirely noteworthy, as it’s long been a tradition for a language compiler to be implemented in its target language, something that’s been true of both our F# and Visual C++ compilers.  What’s quite noteworthy are the scenarios and services this work enables.

Historically, the managed compilers we’ve shipped in Visual Studio have been opaque boxes: you provide source files, and they churn those files into output assemblies.  Developers haven’t been privy to the intermediate knowledge that the compiler itself generates as part of the compilation process, and yet such rich data is incredibly valuable for building the kinds of higher-level services and tools we’ve come to expect in modern day development environments like Visual Studio.

With these compiler rewrites, the Roslyn compilers become services exposed for general consumption, with all of that internal compiler-discovered knowledge made available for developers and their tools to harness.  The stages of the compiler for parsing, semantic analysis, binding, and IL emission are all exposed to developers via rich managed APIs.  As an example, in the following screenshot I’m taking advantage of the Roslyn APIs to parse some code and display the tree of syntax nodes.

The Visual Studio language services for C# and Visual Basic have been rewritten to use these new APIs, and new tools have been introduced to take advantage of all of these services. For example, the new C# Interactive window enables scripting and exploration in C#:

Roslyn represents an exciting opportunity for developers to build richer tools, such as refactorings and deep visualizations, utilizing the same support that Visual Studio and its compilers use for their own work.  It should be noted, however, that this is an early look at this compilation infrastructure, as the Roslyn work is targeted at a post-Visual Studio 11 release.  This CTP will help to illuminate the kinds of exciting end-to-end experiences that are possible with such technology; at the same time, this particular release supports only a subset of each language and is intended for exploration and to enable us to gather feedback from you on the direction.

For more information on Roslyn, to download the CTP, and to let us know what you think, visit


Visual Studio LightSwitch availability

Today, Microsoft Visual Studio LightSwitch 2011 is available for download for MSDN subscribers. Non-MSDN subscribers can get LightSwitch starting Thursday, July 28th, or download a free trial today.

LightSwitch, the newest member of our Visual Studio product family, is a development tool that enables developers of all skill levels to build line-of-business applications for the desktop, web, and cloud quickly and easily.  LightSwitch applications can be up and running in minutes with templates and intuitive tools that reduce the complexity of building data-driven applications, including tools for UI design and for publishing to the desktop or to the cloud with Windows Azure.  LightSwitch enables you to focus on your business needs rather than implementation details when building your application.

I encourage you to try LightSwitch 2011 and see how it can help you build applications.  For details on how to use LightSwitch to build data-driven applications, check out Jason Zander’s blog.


Daniel Moth: Blazing-fast Code Using GPUs and More, with C++ AMP

You may have recently read in this blog about C++ Accelerated Massive Parallelism (C++ AMP) [1]. Since yesterday, those who couldn’t attend the AMD Fusion Developer Summit have had the chance to watch on demand Herb Sutter’s keynote, in which C++ AMP was introduced and some demos were shown [2].

Now Daniel Moth’s introductory C++ AMP session is also posted on Channel 9:

[Watch Daniel Moth’s “Blazing-fast code using GPUs and more, with C++ AMP”]

  1. Introducing C++ Accelerated Massive Parallelism (C++ AMP).
  2. Herb Sutter: Heterogeneous Computing and C++ AMP (AFDS Keynote)


Advanced STL Lectures, Part 5: The Boost Library

In this 5th part of the advanced series, Stephan T. Lavavej digs into the Boost library. In his words, it’s an open-source, super-quality, community-driven set of libraries that complements the Standard Template Library (STL). Stephan will walk you through a sample application from end to end, using Boost.

New to the Standard Template Library? Watch Stephan’s great introductory series on the STL.

Have you missed any of the previous chapters? Now you can watch the whole series (so far):


Enforcing Correct Concurrent Access of Class Data

Hi, this is Jim Springfield. I’m an architect on the Visual C++ team.

In any concurrent application, protecting data from concurrent access is extremely important. There are many primitives that can be used for this, such as critical sections, mutexes, reader-writer locks, etc. There are also some newer high-level approaches to concurrency, such as those provided by the Concurrency Runtime, although those aren’t the focus of what I’m showing here. However, there isn’t a good way in C++ to make sure that you are really protecting data correctly when accessing it from multiple threads. You will often see a comment (likely made by the original author) next to a member that reminds you to take some lock when accessing the data. There may be many data items all using the same lock, and there may be more than one lock, with some data protected by one lock and some by another.

When it comes time to access some data from a member function, you have to start asking questions. Who is going to call this member? What locks will already be held? Could I deadlock here? While I don’t have a solution to all of these, I do have a technique that allows you to be more aggressive in trying things and more comfortable in making changes to existing code, while guaranteeing that you don’t violate the requirement that a particular lock be held.

What I’m going to show is a way to associate a lock with a data member such that whenever that data member is accessed, a check is made that the proper lock is held by the thread. The basis for the technique uses native properties to provide access to data members. With a small set of macros, you can easily retrofit existing code to provide this benefit. I developed this technique years ago and I have used it in several code bases to catch problems with concurrent access.

Here is an example of something you will typically see in code. The developer has written that a critical section should be held when accessing m_rgContextsCache.

    // Make sure m_cs is held when accessing m_rgContextsCache
    vector<FileConfig> m_rgContextsCache;

Wouldn’t it be great if this information could be specified in code AND enforced? The code below shows how to transform this into just that.

    PROTECTED_MEMBER(m_cs, vector<FileConfig>, m_rgContextsCache);

Now, whenever m_rgContextsCache is accessed, a user-defined function will be called if the proper lock is not held. What the macro does is create the actual data member with a slightly modified name, plus a property with the name specified. Now all you have to do is run your code and see if any errors occur. There is one “gotcha”: when members are initialized in the constructor or referenced in a destructor, the lock isn’t going to be held. For those cases, you need to access the member directly. A macro that translates a name into the modified “real” name can be used for this. It can also be used anywhere that it is specifically safe to access the member outside of the lock. The nice thing is that it is now very clear when you are doing so. Here is the code for this.

    // The USN macro is used when you need to access a data member in an "unsafe" way.
    // This makes sense when you know no other thread is accessing it, such as in a constructor.
    #define USN(name) name##_usn_

The PROTECTED_MEMBER macro is defined below. The first line creates the actual member. The second line creates the property and the remaining lines implement the get and put.

    #define PROTECTED_MEMBER(cs, type, name) \
        type USN(name); \
        __declspec(property(get=Get_##name, put=Put_##name)) type name; \
        type & Get_##name() \
        { _PROTECT(verify_lock(cs)); return USN(name); } \
        type const & Get_##name() const \
        { _PROTECT(verify_lock(cs)); return USN(name); } \
        type & Put_##name(type const & x) \
        { _PROTECT(verify_lock(cs)); USN(name) = x; return USN(name); }

There are a couple of things that aren’t defined yet. The verify_lock function returns a boolean indicating whether the lock is held or not; it can be defined for any type of lock you use. There is also the _PROTECT macro. This should be defined to do whatever you want in the case of a failure: it could log, assert, crash, etc.

There are some other variations of the macro to handle some additional cases. One is to handle arrays. It provides a parameterized property which handles the index.

    #define PROTECTED_MEMBER_ARRAY(cs, elemtype, name, length) \
        typedef elemtype type_##name[length]; \
        elemtype USN(name)[length]; \
        __declspec(property(get=Get_##name, put=Put_##name)) elemtype name[length]; \
        elemtype& Get_##name(size_t i) \
        { _PROTECT(verify_lock(cs)); return USN(name)[i]; } \
        type_##name& Get_##name() \
        { _PROTECT(verify_lock(cs)); return USN(name); } \
        const elemtype& Put_##name(size_t i, elemtype const& x) \
        { _PROTECT(verify_lock(cs)); USN(name)[i] = x; return USN(name)[i]; }

To handle a reader-writer lock, a slightly different macro is used. Instead of verify_lock, two other functions are used: verify_readlock and verify_writelock. Again, these can be user-defined to handle any type of reader-writer lock. There is one additional wrinkle here, however: a function named GetWritable_##name is also defined. The getter returns a const& to the underlying member and verifies that a read lock is held, but that won’t allow you to call methods that modify the member. To do that, you have to explicitly call GetWritable_##name. This returns a non-const reference and verifies that the write lock is held.

    #define PROTECTED_MEMBER_RW(lock, type, name) \
        type USN(name); \
        __declspec(property(get=Get_##name, put=Put_##name)) type name; \
        const type & Get_##name() \
        { _PROTECT(verify_readlock(lock)); return USN(name); } \
        type & Put_##name(type const& x) \
        { _PROTECT(verify_writelock(lock)); USN(name) = x; return USN(name); } \
        __declspec(property(get=GetWritable_##name)) type Writable_##name; \
        type & GetWritable_##name() \
        { _PROTECT(verify_writelock(lock)); return USN(name); }

There are a couple of other variations of the PROTECTED_MEMBER macro to handle additional cases that can occur. If the data member can’t be assigned to (i.e. it is a type without assignment), we must not provide a “Put” or we will get a compile error. Similarly, we may have a type that can’t be assigned from const data. These cases occur rarely in practice, but they do occur.

    #define PROTECTED_MEMBER_NC(cs, type, name) \
        type USN(name); \
        __declspec(property(get=Get_##name, put=Put_##name)) type name; \
        type & Get_##name() \
        { _PROTECT(verify_lock(cs)); return USN(name); } \
        type const & Get_##name() const \
        { _PROTECT(verify_lock(cs)); return USN(name); } \
        template <typename T> \
        type & Put_##name(T x) \
        { _PROTECT(verify_lock(cs)); USN(name) = x; return USN(name); }

    #define PROTECTED_MEMBER_GET(cs, type, name) \
        type USN(name); \
        __declspec(property(get=Get_##name)) type name; \
        type & Get_##name() \
        { _PROTECT(verify_lock(cs)); return USN(name); } \
        type const & Get_##name() const \
        { _PROTECT(verify_lock(cs)); return USN(name); }

Finally, here are some examples of verify_lock and verify_unlock that can handle critical sections by pointer or by reference.

    inline bool verify_lock(const CRITICAL_SECTION& cs)
    {
        return (cs.OwningThread == (HANDLE)(UINT_PTR)GetCurrentThreadId());
    }

    inline bool verify_unlock(const CRITICAL_SECTION& cs)
    {
        return (cs.OwningThread == (HANDLE)(UINT_PTR)0);
    }

    inline bool verify_lock(const CRITICAL_SECTION* cs)
    {
        return (cs->OwningThread == (HANDLE)(UINT_PTR)GetCurrentThreadId());
    }

    inline bool verify_unlock(const CRITICAL_SECTION* cs)
    {
        return (cs->OwningThread == (HANDLE)(UINT_PTR)0);
    }

What I typically do is put all of these macros in a header file under an #ifdef _PROTECT guard. If _PROTECT is not defined, then I simply let everything collapse to simple data members. For release builds, the code is just as fast as before.

    #ifdef _PROTECT
    // all of the code from above
    #else
    #define USN(name) name
    #define PROTECTED_MEMBER(cs, type, name) type name;
    #define PROTECTED_MEMBER_NC(cs, type, name) type name;
    #define PROTECTED_MEMBER_GET(cs, type, name) type name;
    #define PROTECTED_MEMBER_RW(cs, type, name) type name;
    #define PROTECTED_MEMBER_ARRAY(cs, elemtype, name, length) elemtype name[length];
    #endif

Finally, there is no good way that I’ve found to do this for global or static member data. Usually, it isn’t too much work to wrap global or static data into a class (along with the appropriate lock), which is what I’ve done when I need to.
