hosting undocumented

I’ve wanted to write about the full stack for a long time, and thanks to Roy, I’ve now found the time and the will to go down the rabbit hole. If you thought the whole aspx page model was already complex, you’re going to discover that it is only the tip of the iceberg of what is really happening.

The life of an HTTP Request

We’re going to go down the rabbit hole and look at what happens whenever your browser sends an HTTP request to your beloved .net server. You thought it was simple? Boy, were you wrong! Please note that I’m going to talk about two cases: IIS5 (Windows 2000 and Windows XP) and IIS6 (Windows Server 2003).

Whenever you type a URL, your browser establishes a connection to your web server and starts sending a piece of very simple text, called an HTTP request. On the listening side, you’ve got a web server (how obvious is that) reading this data. The server, in our case IIS, relies on an extensibility model: ISAPI extensions and filters, which are C++ dlls that can be initialized by the server and process the request on their own. As a side note and for general culture, an extension is attached to a file extension, while a filter always lives in the IIS process and sees every request. ASP.NET has both a filter and an extension, but we’ll only concentrate on the extension for today.

As some of you probably know, ISAPI is used for most of the popular Microsoft IIS-centric pieces of software, including plain ol’ ASP, FrontPage extensions, and obviously ASP.NET.

You called me?

But IIS itself is no longer one and only one. IIS5 was there way before .net, and relies on the same mechanisms as its parents. Junior, IIS6, is a child of the .net era, and a completely new creation. While they look the same, inside they are completely different.

IIS5 runs in user space. That’s where programs execute and enjoy all the protection and care of their home. That means that IIS5 is more or less the same kind of application as the ones you and I are used to writing. Well. You. If you write in C++. And love unmanaged code.

The IIS5 process is inetinfo.exe, and it listens by itself on the correct port to process incoming calls. Whenever it receives an http request, it looks through all of its isapi extensions to find the one handling it. If none is found, the request returns the file resource requested, or one of the numerous error codes you all love. If an extension is found, IIS initializes it somewhere, depending on the application protection that you defined in your MMC snap-in.

  • Low: the isapi dll is loaded inside inetinfo.exe. That means that in case of a problem, the whole web server dies, and no request can be processed anymore.
  • Medium: in that case, all the dlls are loaded in an external process, the beloved dllhost.exe. That’s where the COM+ infrastructure puts a lot of different things for out-of-process execution, and the one that gives you headaches because you never know which application inside it is provoking that 100% CPU usage. On the other hand, when the process crashes or when you kill it, your web server still processes incoming requests. Obviously, you still lose whatever was running inside that process.
  • High: in this mode, one dllhost.exe is spawned for each isapi dll. Good thing: in case one dies, everything else is preserved. Bad thing: it is very heavy, as a new process is created (and that’s heavy on Windows), plus the COM+ infrastructure handling it. You pay the price of boundaries. Nothing is free.

Or is it? IIS6, the beloved newborn, took over this concept and gave administrators a fantastic tool. Sometimes you want several of these isapi extensions to be grouped into one out-of-process host, because if they die, you don’t mind going to only one ceremony. Let me explain that a bit further.

Instead of staying in the living room, on the couch, listening for the doorbell to ring, drinking a beer, yelling in front of the tv… bad… childhood… memories… must… forget… IIS6 relies on two components: http.sys and w3wp.exe.

http.sys, as its name strongly suggests, is a kernel mode driver. Its role is to listen on a tcp port and ferry the data around as needed. That means that instead of executing in user space, it goes down into the basement. It’s dark down there, and as is popular to say, “in kernel mode no one can hear you scream”. If there were a problem down there, chances are your operating system would die instantly. So why in the name of god did mommy Microsoft put the first of the now revealed twins in such a dangerous place?

It all comes down to drivers, context switches, and other very low level technologies that I would be incapable of understanding fully. Or willing to, for that matter. But to understand, you only need two assertions:

  • Network drivers run in kernel mode. That’s the first one. Drivers run in the kernel, as do the tcp stack and disk access. All the pipes run under the floor, in the basement.
  • In kernel mode, you have a very nifty capability: you can ask for data to be copied straight from disk to the network card, without it ever going through user mode. That’s what I call low overhead.

I know you’re looking at that, and I can certainly hear the sound of your “aaaaaaaaah aaaaaaaaaah” moment. The http.sys driver can process all these requests very efficiently. But I can also feel you being afraid of the dark. If I execute unknown code down in the kernel, that could be a serious issue, right? Absolutely right, and that’s exactly why the http.sys driver doesn’t do it at all.

Whenever http.sys receives a request, it first checks whether its own separate cache already has the requested information. When this test is positive, it can send the response straight down the pipe very efficiently, and that’s where the 175% performance increase figures you saw when Windows Server 2003 was released come from. If it doesn’t find it, it looks at the request and routes it to an application pool.

Just as for each vdir in IIS5 you can set an isolation mode to host your isapi dll, in IIS6 each web site can be part of an application pool. An application pool can contain several web sites, and that’s the beauty of it. It is done by spawning a worker process for each application pool. The name of this new host you’ll often encounter is w3wp.exe, which I believe is an acronym of “we 3nded wildly pissed”. To be fair, a certain category of extremist developers try to expand it into “world wide web worker process”, but as you can see, this is highly improbable.

Each one of these worker processes executes the isapi dll in process, so in a pure IIS6 environment you wouldn’t use dllhost anymore. In case of a crash, your web server continues to accept incoming requests, and only the isapi dlls in that application pool die. As http.sys relies on w3wp to process user code, as long as it doesn’t die by itself, the driver runs no external risk from foreign code execution.

Whenever a request comes in, both IIS5 and IIS6 delegate the work to the ISAPI extension.


For .net, our ISAPI extension is named ASPNET_ISAPI.dll. Whenever an incoming request matches the file extension to which the isapi extension is attached, IIS initializes this dll. What does it do then? Here again, IIS5 and IIS6 differ wildly.

In IIS5, the extension checks for the presence of the worker process. This is the process hosting the CLR and executing all your .net code. The worker process is aspnet_wp.exe (once again an acronym: “ASP Naysayers End There. We’re Pissed!”. As always, there’s a sense of continuity in Microsoft software naming schemes.).

To go deeper into the details (and we’re much into details, or you wouldn’t be this far into the article), the isapi checks whether the worker process is present or not. If not, it creates it. In both cases, it then creates an asynchronous named pipe onto which the request is sent, after a handshake which ensures proper communication and allows for transmitting authentication information.

It is interesting to notice that, because there can only be one worker process, all of your applications run in a single process. That completely invalidates the application protection feature of IIS5, even more so if you choose to run it under different credentials.

There’s also a special case. The ISAPI dll actually reads the machine.config file whenever IIS5 is started. If you configured, in the processModel element, the enable attribute to false, the worker process is not used, and the runtime is instead hosted inside the inetinfo.exe process.

Finally, on the security side, from what I’ve said you could assume that setting process impersonation in your machine.config file would not work. It does, but instead of executing the worker process under different credentials, IIS sets the token for the correct credentials on the thread executing the request, which is then set in the application domain of your web application.
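To illustrate the mechanism (a hedged sketch of thread-level impersonation in .net, not the actual ASP.NET code; how the token itself is obtained is elided here):

```csharp
using System;
using System.Security.Principal;

class ImpersonationSketch
{
    static void ProcessUnderToken(IntPtr token)
    {
        // Impersonation applies to the current thread only;
        // the process identity is left untouched.
        WindowsImpersonationContext ctx = WindowsIdentity.Impersonate(token);
        try
        {
            // Process the request under the configured credentials here.
        }
        finally
        {
            // Always revert, or the thread keeps the borrowed identity.
            ctx.Undo();
        }
    }
}
```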

In IIS6, the extension doesn’t spawn anything, and simply hands control back to w3wp.exe, which hosts the CLR itself.

On both sides, we now have a CLR loaded, and an HTTP request. What next?

Let’s have fun in the HTTP Pipeline

The next step is obviously to link the unmanaged code containing the request (the worker process side) with the managed code turning it into an aspx page, an asmx web service, or anything else the managed runtime can generate.

Whenever a first request comes in for an http application (or virtual directory), the first thing the worker process does is create a new AppDomain. This execution unit, very similar to a process in the unmanaged world, provides the isolation needed by each application to run securely. How is this done? Through COM interop.

If you look at the System.Web.Hosting namespace, you’ll see a few public types and many private ones. The first two we’re interested in are IISAPIRuntime and IAppDomainFactory. Both are exposed as COM interfaces. The actual implementations we’re interested in are ISAPIRuntime and AppDomainFactory. Let’s see how.

Creating the AppDomain

Let’s assume that no previous AppDomain was created for your web application when the first request goes through. The worker process calls the following method on the AppDomainFactory object.

public object Create(string module, string typeName, string appId, string appPath, string strUrlOfAppOrigin, int iZone)

Let’s look at it for a bit. The first thing you might notice is that all the parameters are simple types that marshal trivially to and from COM. This makes calling from the COM world much easier.

The second thing you see is that it’s given a module (that is, more or less, an assembly file), a typeName, and a few other parameters that are quite self-descriptive. Also note that the return value is an object; this is going to be very important. Let’s look at what this method does.

After some sanity checks on the appPath, the method establishes the properties of the new AppDomain to create, and this is done through an AppDomainSetup object. Among the properties set for the new AppDomain is the appPath, formatted (and validated) as a Uri.

Interestingly enough, the cleaning method used (which I have used as well in many projects) is to create a new Uri from the passed argument and return Uri.ToString().

The other action taken is to add to an IDictionary object (really a Hashtable) a set of properties that I’m going to deal with later.
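The Uri round-trip cleaning trick mentioned above fits in a couple of lines (the sample url is mine, not taken from the framework):

```csharp
using System;

class UriCleaning
{
    static void Main()
    {
        // Constructing a Uri canonicalizes the path: dot-segments are collapsed.
        Uri cleaned = new Uri("http://localhost/app/./sub/../page.aspx");
        Console.WriteLine(cleaned.ToString()); // http://localhost/app/page.aspx
    }
}
```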

The next step is creating the Evidence under which the code is going to execute. What goes into it? Interestingly enough, it copies the Host evidences and the Assembly evidences from the current AppDomain. In other words, you inherit the main AppDomain’s Evidence objects.

Another nice thing to know: whenever this evidence list is constructed, the framework looks at the Host evidences, and if no Zone evidence is defined (see System.Security.Policy.Zone), the Zone for “My Computer” is automatically added to the evidences.

Finally, a new Host evidence is added using the strUrlOfAppOrigin.

The new AppDomain is finally created with the Evidence and the AppDomainSetup that were just built. Remember the properties I was talking about earlier? They are applied to the AppDomain using AppDomain.SetData(string name, object data).
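Put together, the sequence looks roughly like this (a simplified sketch; the store key shown and the choice of properties are illustrative, not the ones the framework actually uses):

```csharp
using System;
using System.Security.Policy;

class AppDomainCreationSketch
{
    static AppDomain CreateWebDomain(string appId, string appPath, string urlOfAppOrigin)
    {
        // One of several properties configured on the AppDomainSetup.
        AppDomainSetup setup = new AppDomainSetup();
        setup.ApplicationBase = appPath;

        // Inherit the current domain's evidence, then add the application url.
        Evidence evidence = new Evidence(AppDomain.CurrentDomain.Evidence);
        evidence.AddHost(new Url(urlOfAppOrigin));

        AppDomain domain = AppDomain.CreateDomain(appId, evidence, setup);

        // The extra properties land in the AppDomain store via SetData.
        domain.SetData("SampleKey", appPath); // key name is made up for the example
        return domain;
    }
}
```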

As a side note, the way the store used by the SetData is manipulated could be a future entry subject…

Anyway, a whole set of runtime properties is defined this way at the AppDomain store level.
Are we done yet? No. After setting the AppDomain properties, creating a suitable Evidence object, and adding runtime information, the last piece is to apply a security policy. How is this done? At first I started to dig into the details of the exact procedure, but it seems more valuable to just look at what the permission sets are:

  • A parent UnionCodeGroup containing an AllMembershipCondition and a PolicyStatement with a PermissionSet constructed with PermissionState.None;
  • A first child UnionCodeGroup containing a StrongNameMembershipCondition based on the Microsoft strong name public key, and a PermissionSet constructed with PermissionState.Unrestricted;
  • A second child UnionCodeGroup with a UrlMembershipCondition set to the application url, associated with a PermissionSet constructed from both the Url and the Zone (the application’s strUrl and iZone parameters), but without the UrlIdentityPermission and ZoneIdentityPermission permissions.

This might sound a bit obscure to most of us, so here is my attempt at providing a definition of the permissions defined on our AppDomain.

By default, the appdomain has no permissions at all (PermissionState.None) for all of the code in it (the AllMembershipCondition).

We then open this up by granting an unrestricted permission set (PermissionState.Unrestricted) to the Microsoft-signed dlls (StrongNameMembershipCondition).

Finally, we add the permissions defined for the application, based on its url and its zone.

This policy is then applied through the call to domain.SetAppDomainPolicy(myPolicy). The last step, and the simplest one for that matter, is to create an object from an assembly (the module argument) with a type (the typeName argument).
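The resulting policy tree can be sketched as follows (hedged: the Microsoft key blob, the application url, and the application permission set are placeholders passed in by the caller, and error handling is omitted):

```csharp
using System;
using System.Security;
using System.Security.Policy;

class PolicySketch
{
    static void ApplyPolicy(AppDomain domain, StrongNamePublicKeyBlob microsoftKey,
                            string appUrl, PermissionSet appPermissions)
    {
        // Parent: no permissions at all, for all code.
        UnionCodeGroup root = new UnionCodeGroup(
            new AllMembershipCondition(),
            new PolicyStatement(new PermissionSet(PermissionState.None)));

        // First child: full trust for Microsoft-signed assemblies.
        root.AddChild(new UnionCodeGroup(
            new StrongNameMembershipCondition(microsoftKey, null, null),
            new PolicyStatement(new PermissionSet(PermissionState.Unrestricted))));

        // Second child: the application's own permissions, keyed on its url.
        root.AddChild(new UnionCodeGroup(
            new UrlMembershipCondition(appUrl),
            new PolicyStatement(appPermissions)));

        PolicyLevel policy = PolicyLevel.CreateAppDomainLevel();
        policy.RootCodeGroup = root;
        domain.SetAppDomainPolicy(policy);
    }
}
```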

What an already tedious process. Thankfully, the runtime only does it once. My guess, as unreliable as it is, is that the worker process keeps a reference to the different objects and associates them with each application, reusing them whenever needed.

Before we move forward, a tiny gem. In the case where the strUrlOfAppOrigin argument is null or zero length, the framework automatically assumes this url: http://localhost/ASP_Plus. For those of you who might not remember it, ASP+ was the code name for ASP.NET before the big naming scheme change around beta 2 of .net.

We’ve seen how the AppDomain is constructed and how an object is created inside it. The biggest question now is… what object? If you followed the article attentively, you already feel the answer coming. Because the worker process doesn’t handle .net code itself and goes through a COM layer, we already know that the object created is ISAPIRuntime, marshalled to COM through the IISAPIRuntime interface.

Her royal majesty Runtime the first

So here we are, with a fully built ISAPIRuntime object. The declared IISAPIRuntime interface has four methods:

  • StartProcessing is called one time, just after the AppDomain creation. I have no idea why it is there, to be honest; my best guess is that the managed team removed, in later phases, initialization code that was once done at that point.
  • StopProcessing is called whenever the application stops processing incoming requests, automatically provoking the death of the AppDomain.
  • DoGCCollect is a bit of a surprise, and I’m sure the .net architects have a very good explanation as to why it is there. The actual action is to call GC.Collect() exactly 10 times. My guess here would be a desperate attempt by the worker process to reclaim some of its memory under high load, but I can’t say for sure. Anyone with more information?
  • ProcessRequest is the one that is interesting here, as it is really the beginning of the managed HTTP pipeline.
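For reference, here is the shape of the interface as I understand it, reconstructed from the descriptions above (a sketch, not the exact declaration, and the COM attributes are approximated):

```csharp
using System;
using System.Runtime.InteropServices;

// COM-visible contract between the unmanaged worker process and the managed runtime.
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
public interface IISAPIRuntime
{
    void StartProcessing();                       // called once after AppDomain creation
    void StopProcessing();                        // tears the AppDomain down
    int ProcessRequest(IntPtr ecb, int iWRType);  // entry point of the managed pipeline
    void DoGCCollect();                           // calls GC.Collect() ten times
}
```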

The ProcessRequest method takes an IntPtr argument named ecb, and an int argument named iWRType. Because we receive this call from unmanaged code, we have to get a bit messy and go through pointers to get to the http request itself.

First and foremost, ProcessRequest creates an object of type HttpWorkerRequest. Not so simple in fact, as through a class factory pattern, the actual object can be one of several types (your mileage may vary):

  • The ISAPIWorkerRequest type is the mother of all worker request objects in the ISAPI runtime. The class factory for the types we’re interested in is implemented in the static method CreateWorkerRequest, which can create one of three types of objects:
      • If the process model is used (that’s our iWRType parameter), that is, if it’s different than zero, an ISAPIWorkerRequestOutOfProc object is created;
      • If not, and if the IIS version is greater than or equal to six, an ISAPIWorkerRequestInProcForIIS6 object is created;
      • And finally, if neither, an ISAPIWorkerRequestInProc object is created.
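The dispatch described above can be sketched like this (the real types are internal to System.Web, so the stub classes and constructor shapes below are my own reconstruction):

```csharp
using System;

// Stub types standing in for the internal System.Web classes.
class ISAPIWorkerRequest { }
class ISAPIWorkerRequestOutOfProc : ISAPIWorkerRequest { public ISAPIWorkerRequestOutOfProc(IntPtr ecb) { } }
class ISAPIWorkerRequestInProcForIIS6 : ISAPIWorkerRequest { public ISAPIWorkerRequestInProcForIIS6(IntPtr ecb) { } }
class ISAPIWorkerRequestInProc : ISAPIWorkerRequest { public ISAPIWorkerRequestInProc(IntPtr ecb) { } }

static class WorkerRequestFactory
{
    // Mirrors the three-way dispatch described in the article.
    public static ISAPIWorkerRequest Create(IntPtr ecb, int iWRType, int iisMajorVersion)
    {
        if (iWRType != 0)
            return new ISAPIWorkerRequestOutOfProc(ecb);     // process model: out of proc
        if (iisMajorVersion >= 6)
            return new ISAPIWorkerRequestInProcForIIS6(ecb); // IIS6 in-proc flavor
        return new ISAPIWorkerRequestInProc(ecb);            // IIS5 in-proc
    }
}
```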

Before returning, the Initialize method is called on whichever object was created, at which point data starts to be fetched from the worker process. I won’t go into how it is done, but suffice it to say that yes, some data is loaded in memory before anything can happen, and that’s the http header, encoded in a tab-separated format (yes, there is funny stuff happening in the worker process).

Once the ISAPIWorkerRequest is fully constructed, we finally end up in the “official” HTTP pipeline, through the call to HttpRuntime.ProcessRequest.

Her majesty Runtime the second

Well, yes, the naming convention is a bit awkward, but it does reflect the reality of two chained runtimes. The first one is responsible for the link between the unmanaged and the managed world; the second one is all good managed implementation.

The wonder of calling a static method like HttpRuntime.ProcessRequest is that on the first call, a lot of things happen to construct the object behind it. The static constructor for HttpRuntime first calls the Initialize method. That’s where the registry is traversed to find the Path key, which you can find yourself under HKLM\Software\Microsoft\ASP.NET\version\Path, initializing the s_installDirectory private variable.

An instance of HttpRuntime is then created, and the Init method is called on it. Here, everything related to the http runtime is initialized, and that’s where the information set through the calls to SetData becomes important… it’s here that all this information is fetched back into our object. It is also at this moment that the file monitors on the content of the web site, the cache, and the profiler are initialized.

We’re nearly at the point where we reach the part of the http pipeline that is very well documented on the web, so there’s no reason to increase the size of this article even more. But there are still a few interesting points.

What happens once we call our ProcessRequest method? Intuitively, you would think that your request is actually processed. In fact, it is not. Or should I say, it is not necessarily processed.

Whenever you call the ProcessRequest method, your HttpWorkerRequest object may go into a queue. Depending on how many threads the worker process has available, your request may be queued (while your thread executes a previously queued request), it may be executed immediately, or it may simply not be processed at all.

Why is that important? If you dig through the documentation for the IHttpHandler interface, you’ll notice that an IHttpAsyncHandler interface also exists. By releasing the execution thread early, you let other incoming requests to your application queue up. More than “higher performance”, this is about “higher throughput”: by queuing more, you don’t block precious thread resources.
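As an illustration (my own minimal sketch, not framework code; the delegate BeginInvoke pattern was the idiomatic way to go async in .net 1.x):

```csharp
using System;
using System.Web;

// Minimal IHttpAsyncHandler sketch: the pipeline thread is released
// between BeginProcessRequest and EndProcessRequest.
public class SlowHandler : IHttpAsyncHandler
{
    delegate void WorkDelegate(HttpContext context);

    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Only the async entry points are used for this handler.
        throw new InvalidOperationException();
    }

    public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object state)
    {
        WorkDelegate work = new WorkDelegate(DoSlowWork);
        // BeginInvoke moves the work to a thread-pool thread, freeing the pipeline thread.
        return work.BeginInvoke(context, cb, work);
    }

    public void EndProcessRequest(IAsyncResult result)
    {
        WorkDelegate work = (WorkDelegate)result.AsyncState;
        work.EndInvoke(result);
    }

    void DoSlowWork(HttpContext context)
    {
        context.Response.Write("done");
    }
}
```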

A final word

We’ve been through the whole stack, from the HTTP request up to the beginning of the official http pipeline. But if you look at many other articles on the subject of hosting, you might notice that they in fact talk about two other classes: ApplicationHost and SimpleWorkerRequest. What exactly is the difference with this scheme?


As the name strongly suggests, ApplicationHost is a class created to help developers host ASP.NET outside of the IIS environment. It is a separate mechanism that IIS doesn’t use at all. However, by calling the CreateApplicationHost method, you go through the exact same process as the IISAPIRuntime interface does, with one strong difference: the code explicitly checks that the underlying platform is NT.

As for SimpleWorkerRequest, it is a very simple “data” class that lets you execute your pages either within the isolation mechanism of AppDomains, through its first constructor and the CreateApplicationHost method, or from your own AppDomain, through the second constructor of that class.
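The classic usage, as I understand it, goes something like this (the paths and page name are mine; this assumes your assembly sits in a bin folder under the physical directory, as CreateApplicationHost requires):

```csharp
using System;
using System.Web;
using System.Web.Hosting;

// The host type must be MarshalByRefObject: it executes in the new AppDomain.
public class PageHost : MarshalByRefObject
{
    public void ProcessPage(string page, string query)
    {
        // First constructor: relies on the AppDomain set up by CreateApplicationHost.
        SimpleWorkerRequest swr = new SimpleWorkerRequest(page, query, Console.Out);
        HttpRuntime.ProcessRequest(swr);
    }
}

class Program
{
    static void Main()
    {
        // Spins up a full ASP.NET AppDomain outside of IIS (NT platforms only).
        PageHost host = (PageHost)ApplicationHost.CreateApplicationHost(
            typeof(PageHost), "/", @"c:\temp\www"); // physical path is an example
        host.ProcessPage("default.aspx", "");
    }
}
```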


We’ve gone very deep into the unofficial relationship with IIS. What do you want to see in the future? How mono works? How things change in 2.0? Or uncovering some secrets from the “official” http pipeline?



WTF ~ ?

Today, Smon asks why the ~ operator is not defined for the byte type. It is the exact same story for the bit-shift operators (>> and <<). What’s the official rationale? That it doesn’t make any sense, as the processor is faster doing these operations on 32-bit types than it is on 8-bit ones. If you prefer, the processor would convert these types to 32 bits anyway.

The only way to define these at the language level is either to let the compiler analyze and cast up and down itself, or to let the JIT treat this as a special case. My guess is that hiding the slowness of byte operations behind implicit casting prevents the developer from knowing that what he’s doing has performance implications. A warning at the compiler level could resolve this in a clean way. What do you think? Should we always be honest with the developer?
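A quick illustration of the consequence in C# (my own example): the operands are promoted to int, so you have to cast the result back down yourself.

```csharp
using System;

class ByteOperators
{
    static void Main()
    {
        byte b = 0x0F;

        // byte inverted = ~b;        // does not compile: ~b is an int
        byte inverted = (byte)~b;     // the cast back down is on you
        Console.WriteLine(inverted);  // 240, i.e. 0xF0

        // Same story for the shift operators: b << 4 is an int.
        byte shifted = (byte)(b << 4);
        Console.WriteLine(shifted);   // 240
    }
}
```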


What's to come

Just to let you know what I'm working on at the moment:

  • An article on IIS undocumented: everything you wanted to know about aspnet_isapi.dll without daring to ask
  • An article on how to break a protocol’s encryption, or how reverse engineering always wins over security through obscurity
  • An article on a dynamic script engine and the challenges it poses
  • An article on how to write a simple compiler for a dynamically typed language
  • An article on how to write a MasterPage system that works and is not a hack, and how to provide url rewriting at the same time, the proper way
  • An article on how to provide a Watson-like infrastructure for your windows applications
  • Hopefully, getting to work with the team on their English web site

I think that's enough to last a few weeks. The first article is already about 4200 words, and I'm not happy with it yet.

By the way, anyone willing to publish one of these articles can mail me :)