No more content for today...

Sadly I've not found conference rooms with power plugs, so no more content for today. I'll probably do a quick update tonight of what was said but it may be fairly short.

Off to see IanG for his WPF talk.


TechEd: Networking in Windows Vista and Windows Server 2008

Why do we need to discuss the new network stack? We want faster applications, we want to connect to anyone else, and we want a simpler, direct model without yet another server. For all these reasons we need the next-generation TCP/IP.

There are some issues with Vista networking. One is the Multimedia Scheduler service. Mark Russinovich published an article about it [EDIT: Here's the link to the article about the interaction between networking and the multimedia scheduler]. One hopes this will be fixed by SP1.

The new TCP/IP stack is a huge project for Microsoft because it's the first time Microsoft has rewritten the stack since the early 90s. The old one was outdated technology. NG TCP/IP ships with Vista, and a slightly different version ships in Server 2008.

The major improvements include receive window auto-tuning, Compound TCP for very fast LANs, ECN (a way for routers to let the client know when they're overloaded), better support for lossy networks (RFC 2582, 2883, 3517, 4138), and much improved IPv6 support with Neighbour Unreachability Detection.
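To put a number on why receive window auto-tuning matters: TCP throughput is bounded by roughly the receive window divided by the round-trip time. A quick back-of-the-envelope calculation in Python (the figures are my own illustration, not from the talk):

    # TCP throughput is capped at roughly window_size / round_trip_time.
    def max_tcp_throughput(window_bytes: int, rtt_seconds: float) -> float:
        """Upper bound on throughput (bytes/second) for a given receive window and RTT."""
        return window_bytes / rtt_seconds

    # Classic fixed 64 KB window on a 100 ms long-haul link:
    print(max_tcp_throughput(64 * 1024, 0.100) * 8 / 1e6)    # ~5.2 Mbit/s, no matter how fast the link is

    # An auto-tuned 1 MB window on the same link:
    print(max_tcp_throughput(1024 * 1024, 0.100) * 8 / 1e6)  # ~84 Mbit/s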

There are other benefits too: a much simpler API, security enhancements with API filtering and monitoring, support for stack offload, and multiprocessor scalability, which wasn't possible before because of NDIS 5.1 limitations.

There are also a few conveniences: no restart needed, auto-configuration and self-tuning of IPv4, and policy-based QoS. There's roaming support for users in IPv4 and (better) in IPv6, home network support has been simplified, and multicasting is more efficient.

It may change in the future, but so far we have full resistance to all TCP/IP-level DoS attacks, and IPv6 for more security. You also have the concept of routing compartments.

Let me try and make you a bit enthusiastic about IPv6. We are running out of addresses. Don't believe me? Believe these numbers. ARIN announced on 21 May 2007 that we will run out in 2010. The RIPE 55 meeting, which took place two weeks ago, said we have 2-4 years before running out, and Vint Cerf said the same on BBC News.

Theoretically, we could live without more IP addresses. We could have several levels of NAT, but that blocks us from building p2p networks. NAT makes peer-to-peer very difficult, because of security. We need to find a balance. Security in IPv4 practically doesn't exist: IPSec is optional, which is not the case in IPv6. IPv6 provides significantly better QoS than IPv4. Routing tables will shrink with IPv6. Mobility does not work in IPv4. Finally, device autoconfiguration doesn't work so well. Some of these technologies have been backported, but they don't work very well.

The benefits of IPv6: address depletion solved, end-to-end p2p communication restored, mobile and roaming connectivity, international mis-allocation addressed, and autoconfiguration.

On terminology: we still talk about hosts, and we have LAN segments, links, subnets... In IPv4 a subnet is restricted to a few devices within the same network. Any enterprise network has a wardrobe of devices to deal with subnetting: subnets only communicate with other subnets thanks to a Cisco router.

We have 128 bits for our addresses. We usually split that into 64 bits for the subnet ID and 64 bits for the interface ID.

An IPv6 address is written as 8 blocks of 16-bit hexadecimal components.
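For example, using Python's standard ipaddress module (my own illustration, with an address from the 2001:db8::/32 documentation range):

    import ipaddress

    addr = ipaddress.IPv6Address("2001:db8::1")

    print(addr.exploded)         # 2001:0db8:0000:0000:0000:0000:0000:0001 - all 8 16-bit groups
    print(addr.compressed)       # 2001:db8::1 - the longest run of zero groups collapses to '::'
    print(len(addr.packed) * 8)  # 128 bits, conventionally 64 subnet + 64 interface ID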

A great thing about IPv6 is that prefix lengths (CIDR notation) replace subnet masks.
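A small sketch of prefix notation, again with the stdlib ipaddress module (the prefix itself is a made-up documentation example):

    import ipaddress

    net = ipaddress.IPv6Network("2001:db8:1234:5678::/64")

    print(net.prefixlen)      # 64 -> the subnet part of the address
    print(net.num_addresses)  # 2**64 possible interface IDs in a single /64
    print(ipaddress.IPv6Address("2001:db8:1234:5678::42") in net)  # True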

There are 3 classes of address: unicast, multicast, and anycast. There is no more broadcast address. IPv4 wasn't designed for networks with a very large number of machines within the same subnet; now that broadcast is gone, you get a performance boost. Instead of broadcasting, there's a bunch of neighbor discovery protocols.
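A quick way to see the address types in code; anycast can't be detected from the bits alone, since it looks like unicast on the wire (the sample addresses are mine):

    import ipaddress

    samples = {
        "ff02::1":     "all-nodes multicast (replaces link broadcast)",
        "fe80::1":     "link-local unicast",
        "2001:db8::1": "global-scope unicast (documentation prefix)",
    }
    for text, label in samples.items():
        a = ipaddress.IPv6Address(text)
        print(text, "-", label, "| is_multicast:", a.is_multicast, "| is_link_local:", a.is_link_local)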

There's a new unspecified address, and there's a new loopback: ::1. There's the concept of well-known addresses. DNS servers are supposed to be FEC0:0:0:FFFF::1, FEC0:0:0:FFFF::2 and FEC0:0:0:FFFF::3. As a developer, be aware that you should be able to accept v6 addresses.
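A sketch of what "accept v6 addresses" means in practice: validate literals with ipaddress and resolve with getaddrinfo rather than assuming dotted-quad IPv4. This is my own example, not from the talk; 'localhost' is used so it runs without network access.

    import ipaddress
    import socket

    def describe(user_input: str) -> str:
        """Accept an IPv4 or an IPv6 literal rather than assuming dotted-quad."""
        addr = ipaddress.ip_address(user_input)   # raises ValueError on garbage input
        return f"IPv{addr.version}: {addr.compressed}"

    print(describe("192.0.2.1"))  # IPv4: 192.0.2.1
    print(describe("::1"))        # IPv6: ::1  (the new loopback)

    # When resolving, let getaddrinfo choose the family instead of hardcoding AF_INET;
    # on a v6-enabled machine 'localhost' typically yields both ::1 and 127.0.0.1.
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo("localhost", 80, socket.AF_UNSPEC, socket.SOCK_STREAM):
        print(family.name, sockaddr)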

Configuration can still use DHCP, but what DHCP servers no longer need to do is generate the address: hosts can generate their own. The other thing is that addresses expire. That gives us back some efficiency and clears the address space.

Addresses expire, and one of the key things in IPv6 is that you have multiple addresses. You may have several global addresses (one expired, one in the future), site-local addresses, or several link-local addresses. It's the key change: when IPv4 was designed, all the processing was on routers. In IPv6 we change the balance; nodes have enough power to do some of the networking processing themselves. That lets us have autonomous networks and supports moving around.
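To illustrate the multiple-addresses point, here's a toy classification of a node's address list by scope. The list itself is invented for the example; a real node would enumerate its interfaces.

    import ipaddress

    node_addresses = ["fe80::d4a8:6435:f5ee:40cb", "2001:db8:85a3::8a2e:370:7334", "::1"]

    for text in node_addresses:
        a = ipaddress.IPv6Address(text)
        scope = ("loopback" if a.is_loopback
                 else "link-local" if a.is_link_local
                 else "global scope (here: documentation prefix)")
        print(f"{a.compressed:40} {scope}")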

Mobility is the coolest thing; it's so simple and powerful it's beautiful. When a device tries to connect and can't find any neighbors, it creates a local address. If the router cooperates, you can change global addresses, but your home address will make sure data keeps flowing.

Vista has what is known as a dual-layer architecture, instead of a dual-stack implementation, which meant that the IPv6 TCP and UDP implementations were grouped together and completely separate from IPv4.
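The practical upshot for server code, sketched in Python (my example, not from the talk; whether IPV6_V6ONLY is exposed, and what its default is, varies by OS):

    import socket

    # One AF_INET6 listening socket serving both protocols: with IPV6_V6ONLY switched
    # off, IPv4 clients show up as ::ffff:a.b.c.d mapped addresses.
    srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    if hasattr(socket, "IPV6_V6ONLY"):
        srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    srv.bind(("::", 0))   # wildcard address, ephemeral port picked by the OS
    srv.listen(5)
    print("listening on", srv.getsockname())
    srv.close()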

Interoperability is probably a key issue for anything being introduced. IPv6 was built for interoperability: ISATAP, 6to4, 6over4, Teredo, PortProxy. Teredo is very important because it bootstraps the world from IPv4 to IPv6. However, it is not without problems.
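Transition addresses embed IPv4 information, which Python's stdlib can extract; a small illustration with example addresses of my choosing:

    import ipaddress

    teredo = ipaddress.IPv6Address("2001:0:4136:e378:8000:63bf:3fff:fdd2")
    print(teredo.teredo)          # (server, client) IPv4 pair decoded from a Teredo address

    six_to_four = ipaddress.IPv6Address("2002:c000:204::1")
    print(six_to_four.sixtofour)  # the IPv4 address embedded in a 6to4 prefix: 192.0.2.4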

That brings us to peer-to-peer networking. The P2P goal is to enable direct communication between applications without relying on centralized servers. You could keep communicating with everyone by keeping connections open. You need to prevent DoS, secure with crypto, resolve changing addresses, maintain a dynamically changing group of nodes, and communicate with a subnet.

The p2p stack in Vista is based on IPv6. It's installed by default in Windows Vista. At its heart you have low-level and high-level APIs.

The concept of a cloud is a group of peers that can communicate with each other. There's a global cloud with everybody, a link-local cloud, or several private global clouds.

A peer is a machine running the application that connects to others. It has a Name, a PNRP ID, and a Certified Peer Address.
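A toy record of those three attributes, just to show the shape of the thing; the hash-based ID below is a stand-in of my own, not the real PNRP ID derivation or wire format:

    import hashlib
    from dataclasses import dataclass

    @dataclass
    class Peer:
        name: str                # human-readable peer name
        pnrp_id: str             # here: a hash derived from the name (illustrative only)
        certified_address: str   # the address endorsed by the peer's certificate

    def make_peer(name: str, address: str) -> Peer:
        fake_id = hashlib.sha256(name.encode()).hexdigest()[:32]
        return Peer(name=name, pnrp_id=fake_id, certified_address=address)

    print(make_peer("alice-laptop", "2001:db8::10"))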

[EDIT: This is overall a big presentation about clouds, chord-based networks, graphs, etc. I worked for too many years with gnutella / gpulp / jxta to continue blogging about the presentation. Battery is nearly dead anyway.]



TechEd: Threat Modeling

As an industry, we're good at looking at code; we're much less good at looking at security in the design. You can run all the tools you want and you'll find security bugs, but you won't uncover any design bugs: they're the hardest to fix.

At a very high level, threat modeling is about identifying threats, mitigating them, and moving on. A couple of years ago the threat modeling guidance we were giving was pretty lousy: you couldn't use it much if you weren't a security expert. That's what we've been focusing on over the last few years.

When the Windows Mobile team did their first threat model, they had to answer a question: is a stolen mobile device still in the threat model? If the bad guy has the device, are you still trying to mitigate? They decided to put up a fight and include it.

For Windows Server 2003, we asked the team to think about an admin browsing on a Domain Controller. If you compromise a DC, you can compromise the whole environment. The question always comes down to mobile code, and we decided to deactivate all mobile code.

Finally, you need to define who your users are. If your user is an IT guy rather than a mother, they're different people. When you present a mitigation to a user, will they understand it?

You have to model your application. We use Data Flow Diagrams for that. You model data, which is where most of the security issues come from these days.

Most whiteboard architectures look like DFDs. [EDIT: Won't reproduce the diagrams; the process is defined on Larry's blog.]

When DFDs talk about processes, they're not .exe files; they're whatever processes data.

You need to define trust boundaries in your DFD. You need to decide whether two processes, when they talk to each other, trust each other. In a DFD it's explicit.

For Vista we decided to add the ability to run some processes with lower privileges. For instance, IE runs in protected mode, and there's a trust boundary between IE and the OS. A process boundary may be between the system process and the user, as you don't trust data coming from the user. Finally, a kernel transition is also a trust boundary.

There are several types of DFDs. Context diagrams give a high-level view. In level 0, you dig into more detail. I've rarely seen much deeper than level 1; if you go further, you're probably in analysis paralysis.

One feature that never made it into Vista was the Castle project: it was where workgroups meet domains. With a DC, the domain authenticates you. In the home, you have several computers but no domain. If you have 5 computers and no DC, how do you synchronize files, passwords, etc. across machines? So we came up with the idea of a castle, a machine responsible for synchronizing the passwords.

Local user ---> Castle service -/-/-> Remote castle

The level 0 DFD contains many services. The shell runs as the user and communicates with the castle service: there's a trust boundary.

The castle service runs as SYSTEM because it has to manipulate the SAM. Config data in the registry is accessed by the castle service, but there's no trust boundary there. And finally there's the machine boundary.

Purists would say that a service this complex should be described at more levels of detail, but if it all runs within the same process boundary, modeling it as a single process is good enough.

Each DFD element is susceptible to certain kinds of threats: STRIDE. [EDIT: again, Larry has the description. Saves me some typing]

An example of repudiation: my brother-in-law lied about sending a check. He said it was sent; I thought he was lying. I got the proof when the check arrived, because it was post-marked. You use a disinterested and trusted third party.

Each element in the DFD is subject to attack. How you attack them depends on the element type: you attack data stores differently than you attack processes. How do you protect these elements?

You need to determine threats. You have primary threats and secondary threats.

If you have an external entity (a user, for example), it's subject to spoofing and to repudiation. Data flows and data stores can be tampered with. The vast majority of DoS attacks are against processes, rarely against data stores. Finally, processes are subject to all of the STRIDE attacks.
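That element-to-threat mapping, written out as the usual SDL-style table (the talk only called out parts of it; the repudiation caveat on data stores is the standard one, not something from my notes):

    # S = Spoofing, T = Tampering, R = Repudiation, I = Information disclosure,
    # D = Denial of service, E = Elevation of privilege.
    STRIDE_BY_ELEMENT = {
        "external entity": {"S", "R"},
        "data flow":       {"T", "I", "D"},
        "data store":      {"T", "I", "D"},   # R too, for stores that hold audit data
        "process":         {"S", "T", "R", "I", "D", "E"},
    }

    for element, threats in STRIDE_BY_ELEMENT.items():
        print(f"{element:15} -> {''.join(sorted(threats))}")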

Secondary threats come from threat trees. They've been around for a long time. They're a bunch of pre-conditions that can lead to a threat. Unless you're a security expert, you don't know what the pre-conditions are. So we did away with building your own threat trees and work from canonical threat trees instead.

For every primary threat, we build the threat tree. At the root you have the primary threat. For example, for spoofing a user there are 4 ways of doing it. For the first way, there are several reasons and several causes, which lead you to the leaf nodes of the tree. You can ask very interesting questions about the leaf nodes. [EDIT: Example about user spoofing.]

An example: analyzing a bank log-in system. The connection is SSL and the password has to be strong. However, the cookie was predictable: it was i+1 for each new authentication. Suddenly, by predicting the cookie, you can hijack someone else's session.
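The flaw side by side with the obvious fix, in a small sketch of my own (the counter start value is arbitrary):

    import secrets

    # The counter-based scheme is trivially predictable: seeing token 1041
    # tells you the next login will get 1042.
    class PredictableSessions:
        def __init__(self) -> None:
            self._counter = 1040
        def new_token(self) -> str:
            self._counter += 1          # i+1 per authentication, as in the bank example
            return str(self._counter)

    # A non-guessable alternative: draw the token from a cryptographic RNG.
    def random_token() -> str:
        return secrets.token_urlsafe(32)

    weak = PredictableSessions()
    print(weak.new_token(), weak.new_token())   # 1041 1042 - easy to hijack a neighbour's session
    print(random_token())                       # unguessable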

We have in the SDL book all the threat trees and all the questions you can ask about the tree nodes.

Every single information disclosure threat can become a privacy issue. If you hold PII, you can get a lawsuit. Any data store can be attacked.

Now you need to calculate the risk of each of these threats. We used to use numbers to calculate risk, but they're always wrong. Plus, the more degrees of freedom you have in the calculation, the more easily you can fudge the numbers so you don't have to fix the bugs. The more numbers, the easier it is.

We used to use DREAD. I don't like it. Even David posted a weblog about it.

Instead, calculate risk with heuristics, with simple rules of thumb: think of the MSRC bulletin rankings. The problem with numbers is that if the number falls under a threshold, subjectivity lets you not fix the bug, whereas with the MSRC rankings you have no way of doing that. You ask: is it remote or local, does it crash the machine, is it a server or a client product? The different levels are all described in the book.
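An illustrative bucketing of those three questions; this is emphatically not the real MSRC rubric, just my sketch of what heuristics-over-numbers looks like:

    def rough_severity(remote: bool, crashes_machine: bool, server_product: bool) -> str:
        # Made-up thresholds purely to show rule-of-thumb bucketing.
        if remote and crashes_machine and server_product:
            return "critical"
        if remote and (crashes_machine or server_product):
            return "important"
        if remote or crashes_machine:
            return "moderate"
        return "low"

    print(rough_severity(remote=True, crashes_machine=True, server_product=True))    # critical
    print(rough_severity(remote=False, crashes_machine=False, server_product=False)) # low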

Now you need to mitigate the threats. You have a bunch of options: leave it as-is (it's low priority, trying to fix it may delay the product, etc.); remove the feature, like we did with Castle; remedy it with a technology countermeasure; or warn users. I don't believe in the prompt: you push the security of the system onto the user instead of taking ownership of the problem. It's not a good solution if it's your only mitigation.

Each STRIDE threat can be mitigated. For spoofing, you have authentication, and the authentication technologies you use differ: Amazon.com uses SSL, which gives encryption, authentication and integrity. For tampering, you have integrity: digital signatures, hashes, etc. The repudiation threat is mitigated with non-repudiation, strong third parties, etc. For information disclosure, you have confidentiality, like ACLs. For DoS, you have availability. For elevation of privilege, you have authorization.
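The same threat-to-mitigation pairs as a lookup table; the parenthesized examples are the ones from my notes plus a couple of obvious additions of my own:

    MITIGATIONS = {
        "Spoofing":               "Authentication (e.g. SSL)",
        "Tampering":              "Integrity (digital signatures, hashes)",
        "Repudiation":            "Non-repudiation (trusted third parties, audit trails)",
        "Information disclosure": "Confidentiality (ACLs, encryption)",
        "Denial of service":      "Availability",
        "Elevation of privilege": "Authorization",
    }

    for threat, mitigation in MITIGATIONS.items():
        print(f"{threat:24} -> {mitigation}")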

[EDIT: Won't take notes on the castle example, so I can enjoy it. Maybe Michael could publish that on his blog as an example!]

