Tuesday, December 28, 2010

IIS Interview Questions

1. Use appcmd.exe to recycle the application pool from the command prompt.

2. appcmd.exe is the command-line tool for IIS 7; you will find this tool at the following location:


3. To recycle your application pool use the following command:
appcmd recycle apppool /apppool.name:
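For example, assuming a pool named "DefaultAppPool" (the pool name here is only illustrative), the full invocation might look like this; appcmd typically lives under %systemroot%\system32\inetsrv on IIS 7:

```shell
REM recycle a specific application pool from the command prompt (IIS 7)
%systemroot%\system32\inetsrv\appcmd recycle apppool /apppool.name:"DefaultAppPool"
```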
Status Code Type of Code
100 Series - Informational
200 Series - Success
300 Series - Redirection
400 Series - Client Error
500 Series - Server Error
The IIS metabase is a special database which is used to maintain the settings and configuration data for IIS. In simple terms, it is a configuration store for IIS (Metabase.xml).

IIS 5.0 --> Metabase is in Binary.
IIS 6.0 & 7.5 --> Metabase is in XML.
In IIS, an anonymous user will be given a user name of "IUSR_MachineName".
Step 1 : In the IIS manager (inetmgr), right-click the "Computer" icon under "Internet Information Services". Click "All Tasks" and select "Backup/Restore Configuration".

Step 2 : Click the "Create backup" button and give a name for your backup file. If you want encryption, enable the encryption option, provide a user name and password, and then click OK.
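On IIS 7, the same kind of backup can be taken from the command line with appcmd; the backup name below is just an example:

```shell
REM create a named backup of the IIS 7 configuration
%systemroot%\system32\inetsrv\appcmd add backup "MyBackup"

REM restore that backup later
%systemroot%\system32\inetsrv\appcmd restore backup "MyBackup"
```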
IIS 7.5
This can be changed from the Virtual Directory properties. First open the Properties of the Virtual Directory, then go to the ASP.NET Version tab.

There we can change the ASP.NET version.
There are three Execution Permissions available.
1. None
2. Scripts Only
3. Scripts and Executables
We can set the session timeout from the Virtual Directory settings for that site.

Right-click the Virtual Directory > Properties > click the "Configuration" button.
Go to the "Options" tab. There, in the Enable Session State section, you can configure the session timeout.
When a client requests an aspx page, the request comes to the kernel level of IIS, i.e. HTTP.SYS. HTTP.SYS receives the request and, based on the application pool name [ which is already registered with HTTP.SYS ], sends the request to the worker process. The Windows Activation Process works as the mediator between them. w3wp.exe loads "aspnet_isapi.dll" to start the HTTPRuntime. The HTTPRuntime creates HTTPApplication objects, and all requests pass through the HTTPModules and finally reach the HttpHandler. This is the request pipeline. After the end of the request pipeline, the ASP.NET page lifecycle starts.

We can host a site on IIS either by creating a Virtual Directory through IIS Manager or by using folder Web Sharing.
Apart from that, Visual Studio provides some built-in features to host a site on IIS, such as publishing the web site, using the Copy Web Site tool, or creating a virtual directory while creating the project by choosing the location as HTTP.
The main components of SVCHost.exe are the WWW Publishing Service (W3SVC) and the Windows Activation Process (WAP).

W3SVC is the mediator between HTTP.SYS and the Windows Activation Process. The Windows Activation Process maintains the worker processes.
Yes, we can.
While creating an Application Pool from IIS, there are two options available: the first is for the default settings and the other is for using an existing pool's settings as a template.
We can select the second one and, from the drop-down listed below it, select any of the application pools as a template.
Yes, we can directly host any site from the physical location of the directory itself.

Right Click on Physical Folder > Properties > Web Sharing

There you need to select the "Share This Folder" option button. It will then ask for an alias name and other settings. Then click OK.

To validate : Run > inetmgr > check that there is a virtual directory with the same "Alias" name that you have given.

If a virtual directory with that name already exists, it will show you an error message while you are providing the "Alias" name.
There are the following reasons where we can use remote debugging:
1. Your development server does not have IIS installed.
2. The development server and the build/release/hosting server are different.
3. Multiple users want to debug simultaneously.

Before giving the definition, you can say this: the concept of the Application Pool has existed since IIS 6.0.
Application pools are used to separate sets of IIS worker processes that share the same configuration and application boundaries. Application pools are used to isolate our web applications for better security, reliability, availability and performance, and to keep them running without impacting each other. The worker process serves as the process boundary that separates each application pool, so that when one worker process or application has an issue or recycles, other applications or worker processes are not affected.
One application pool can also have multiple worker processes.

Main points to remember:
1. Isolation of different web applications
2. Individual worker processes for different web applications
3. More reliable web applications
4. Better performance
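On IIS 7 this isolation can be sketched with appcmd; the pool name and site path below are hypothetical:

```shell
REM create an isolated application pool
%systemroot%\system32\inetsrv\appcmd add apppool /name:"MyAppPool"

REM assign a site's root application to that pool
%systemroot%\system32\inetsrv\appcmd set app "Default Web Site/" /applicationPool:"MyAppPool"
```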
By default, each application pool runs with a single worker process (w3wp.exe). We can assign multiple worker processes to a single application pool. An application pool with multiple worker processes is called a Web Garden. Each worker process has its own threads and its own memory space.

Generally it is not recommended to use InProc session mode while using a Web Garden.
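On IIS 7 a Web Garden can also be configured from the command line by raising the worker-process count; three processes here is just an example value:

```shell
REM turn DefaultAppPool into a Web Garden with 3 worker processes
%systemroot%\system32\inetsrv\appcmd set apppool "DefaultAppPool" /processModel.maxProcesses:3
```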
IIS periodically monitors the health of a worker process [ whether it is idle or not, whether it is time to recycle or not, whether all worker processes are running properly or not ].
Pinging means the Activation Process monitors the worker process's performance, health, idle time etc.
By default it is set to 30 s.
Visual Studio has its own ASP.NET engine which is capable enough to run an ASP.NET web application from Visual Studio, so we just click the Run button to start the application.
Now this is the scenario in the local environment. But if we want to host it on a server from where all users can access the site, then IIS comes into the picture.

IIS provides a redesigned WWW architecture that can help you achieve better performance, reliability, scalability, and security for your web sites. IIS supports the following protocols: HTTP/HTTPS, FTP, FTPS, SMTP, etc. We need to host the site on IIS; when a request comes from a client, it first hits the IIS server, then the server passes it to the ASP.NET worker process to execute. The response is also passed back to the client via IIS itself.
Not only for hosting sites: we can also create our own FTP server and SMTP server using IIS itself.
There are different versions of IIS available, like 5.1, 6.0, 7.0 etc.
WAP is the controller of the worker processes under an application pool. The Windows Activation Process manages the worker processes by starting, stopping and recycling the application pool. When to start, stop and recycle is defined in the application pool settings. The Activation Process is also responsible for health monitoring of the application pool during runtime.

FYI: The health monitoring settings can easily be found in the Properties of the Application Pool.
This is one of the most common questions on IIS. Along with it, the interviewer can ask: what is the difference between a Web Farm and a Web Garden?

When we host our web application on multiple web servers under a load balancer, it is called a Web Farm. This is generally used for heavily loaded web applications where there are many user requests at a time. So when a web application is hosted on different IIS servers behind a load balancer, the load balancer is responsible for distributing the load across the different servers.

Below are the major innovations in IIS 7.0:
Components are designed as modules, and there are major changes in the administration settings.
FYI: You can find out many more; just go through the Microsoft IIS web site.
We can set the idle timeout for a worker process from the Application Pool properties.

In the Performance tab of the application pool, we can set the idle timeout of the worker process. This means the worker process will shut down after the given time period if it stays idle, and will wake up again when a new request comes.
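On IIS 7 the same setting can be applied from the command line; the 20-minute value below is just an example:

```shell
REM set the idle timeout (hh:mm:ss) for DefaultAppPool to 20 minutes
%systemroot%\system32\inetsrv\appcmd set apppool "DefaultAppPool" /processModel.idleTimeout:00:20:00
```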
We can set the default page for a web site from the Virtual Directory settings.
How to:
IIS Manager > Virtual Directory > Right Click > Properties > go to the Documents tab.
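On IIS 7 the default-document list can also be edited with appcmd; the site name and the page home.aspx are hypothetical:

```shell
REM add home.aspx to the front of the default-document list for a site
%systemroot%\system32\inetsrv\appcmd set config "Default Web Site" /section:defaultDocument /+files.[value='home.aspx']
```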

For IIS 5.1 > aspnet_wp.exe
For IIS 6.0 > w3wp.exe
IIS 7.0 has two types of application pool.

1. DefaultAppPool (Integrated)
2. ClassicAppPool
Just simply Run > inetmgr.
Or we can open it from Control Panel > Administrative Tools.
This is one of the more important questions for experienced candidates.

Please read about this in detail.
Yes. IIS can have multiple web sites, and each web site can have multiple virtual directories.

Note : Here web sites means the root nodes.
This is used to automatically register the .NET Framework with your IIS.

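A typical registration with aspnet_regiis looks like the following; the framework directory shown is for .NET 2.0, so adjust it to the version installed on your machine:

```shell
cd %windir%\Microsoft.NET\Framework\v2.0.50727

REM -i installs and registers this ASP.NET version with IIS
aspnet_regiis -i

REM -lv lists the ASP.NET versions registered on the machine
aspnet_regiis -lv
```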
Yes. This is one of the great features of msvsmon.exe. Each instance of the remote debugger has a unique server name, and we can give an instance of the remote debugger any server name. Multiple users are then able to access the server instance.
Well, if there are multiple worker processes running in IIS, it means I have to know the name of my application pool. Then I can run the cscript iisapp.vbs script to find out the process ID and application pool name. Based on the process ID for the particular application, I have to attach to the process from the Visual Studio IDE.
By running the iisapp.vbs script from the command prompt.

Below are the steps:
1. Start > Run > Cmd
2. Go to Windows > System32
3. Run cscript iisapp.vbs
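On IIS 6 the script prints each worker-process PID together with its application pool name; on IIS 7 the equivalent information comes from appcmd:

```shell
REM IIS 6: list worker process IDs and their application pools
cscript %systemroot%\system32\iisapp.vbs

REM IIS 7: the same information via appcmd
%systemroot%\system32\inetsrv\appcmd list wp
```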
For IIS remote debugging, msvsmon supports two authentication modes:

1. Windows Authentication
2. No-Authentication
The tool is msvsmon.exe.

It is located at : Install path\Microsoft Visual Studio 8\Common7\IDE\Remote Debugger\x86
Application pool settings can be saved in XML format.

Right-click the Application Pool > All Tasks > Save Configuration to a File.

This will save all the settings of the application pool as an XML file. We can also make it password protected.
IIS 7.0 .

Even the Vista Home Premium and Ultimate editions also have IIS 7.0.
No. Every web application should have one application pool. By default it is "DefaultAppPool".
Below is the list of permissions that can be set during virtual directory creation:
1. Read
2. Run Scripts
3. Execute
4. Write
5. Browse
Open IIS Configuration Manager.
First of all, right-click Default Web Site > New > Virtual Directory.
Browse to the physical path, set the properties, and click OK.
We can easily debug any web application that is hosted on IIS by attaching to the worker process.
From the Visual Studio IDE > Tools > Attach To Process.
Select the particular process, then start debugging.

For creating a Web Garden we need to go to the Application Pool, then right-click the Application Pool > Properties > go to the Performance tab.

In the Web Garden section, increase the number of worker processes. By default it is 1.
Session data is stored inside the process memory of the worker process [ w3wp.exe ].
Anonymous authentication is the default authentication mode for any site that is hosted on IIS, and it runs under the "IUSR_[ServerName]" account.
Below are the commonly used IIS security settings:

1. Anonymous
2. Integrated Windows Authentication
3. Basic Authentication
4. Digest Authentication
5. Passport Authentication

To set security permissions you need to go to the Virtual Directory > Right Click > Properties > Directory Security.
Click the Edit button.
HTTP.SYS is the kernel-level component of IIS. All requests coming from clients hit HTTP.SYS at the kernel level. HTTP.SYS then queues each request for each individual application pool based on the request.
Whenever we create an application pool, IIS automatically registers the pool with HTTP.SYS so that the particular pool can be identified during request processing.
IIS has mainly two layers: Kernel Mode and User Mode.

Below are the subsections of both of them:
1. Kernel Mode
o HTTP.SYS
2. User Mode
o Web Admin Service
o Virtual Directory
o Application Pool
Recycling an application pool means recycling the worker process (w3wp.exe) and the memory used for the web application.
There are two types of recycling related to the application pool:

1. Recycling Worker Process - Predefined Settings
2. Recycling Worker Process - Based on Memory
The default identity of IIS 6.0 is Network Service,
which has very minimal rights on your system. This user only has read access to the site.
IIS has three different identities.
1. Local System
2. Local Service
3. Network Service
Though we can create a new application pool in IIS with different settings, IIS has its own default application pool named DefaultAppPool.
Before answering this question you need to know which IIS versions are available on the different operating systems. Below is the list of IIS versions by operating system.
Windows Server 2008 / Windows Vista Home Premium and Ultimate - IIS 7.0
Windows Server 2003 - IIS 6.0
Windows XP Professional - IIS 5.1
Now, based on your working experience, you can say that you have worked on IIS 5.1 and 6.0, or only IIS 7.0, etc.
The next question that may be asked after this one is "what is the difference between them?" - well, I will come to this later.

Tuesday, December 21, 2010

IIS issues and Network Problem Diagnosis by using COMMAND LINE tools


I'll show you (and perhaps refresh your memory) some of the more important and powerful command-line utilities you can use to diagnose network problems on your IIS Web servers and IIS Web farms. Some of these tools' roots go back 20 or more years; some have roots in UNIX, and others are strictly Windows utilities.

You can use numerous command-line tools not only to diagnose problems when they occur but to identify potential bottlenecks before they become significant problems. These tools live in \WINNT\system32 and are consequently "pathed" so you can grab a command prompt and run them from anywhere.

Let's start with the basics. You can use the Ping.exe command-line tool to verify whether a local or remote TCP/IP system is available; you use it when your network resources stop talking.

Consider a scenario in which you receive reports that a Web application is suddenly receiving SQL Server errors. First, you can verify that both the IIS box and SQL Server are up and running. If they are, a simple ping test from the IIS box to the SQL Server box is a quick way to test connectivity. You can ping using DNS name, NetBIOS name, or IP address. And you'd probably do it in this order:
  1. Ping sanio.sys.com
  2. Ping sanio
  3. Ping
A positive response from a ping resolved by DNS, such as in method 1, means the problem isn't a DNS one, and the basic connectivity between the boxes is functioning. A negative response (i.e., a "request timed out" message) means you can try both the NetBIOS and IP address methods to hunt down the problem. Let's say the DNS method failed in this manner:
C:\WINNT\system32>ping sanio.sys.com
Pinging sanio.sys.com [] with 32 bytes of data:

 Request timed out.
 Request timed out.
 Request timed out.
 Reply from Destination host unreachable.
Ping statistics for
 Packets: Sent = 4, Received = 1, Lost = 3 (75% loss),
 Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum =  0ms, Average =  0ms
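The loss percentage in the statistics line is simply (sent - received) / sent; a quick sketch with the counts from the output above:

```shell
# Counts taken from the ping statistics above
sent=4
received=1
lost=$((sent - received))
pct=$((lost * 100 / sent))
echo "Packets: Sent = $sent, Received = $received, Lost = $lost ($pct% loss)"
```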

And let's assume a NetBIOS or IP ping method succeeds in this manner:

Pinging with 32 bytes of data:
 Reply from bytes=32 time<10ms TTL=128
 Reply from bytes=32 time<10ms TTL=128
 Reply from bytes=32 time<10ms TTL=128
 Reply from bytes=32 time<10ms TTL=128
Ping statistics for
     Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),

Approximate round trip times in milli-seconds:
 Minimum = 0ms, Maximum =  0ms, Average =  0ms

The problem obviously lies with DNS, and you can fix it accordingly. I turned off my internal Windows 2000 DNS server to illustrate the problem above, and when I turned the DNS service back on, everything worked correctly again.

You can use the ping tool when you test your Internet connectivity to the world outside the firewall or to test Internet connectivity to servers you host from remote locations. We are currently having a devil of a time with our ISP. We host our static site from a computer room in our San Diego office. Ironically, as I write this (in the middle of the night from home), I know that Internet connectivity is down because I just grabbed a browser to take a peek at our site's new look and feel. It failed. I grabbed a command prompt and pinged http://www.interknowlogy.com and it failed miserably. So it's time to call our network guys—they dread hearing from me, and I guarantee they aren't happy with our ISP.

In the last issue of IIS Administrator UPDATE (see the URL at the end of this column), I talked about how powerful command-line utilities can help you diagnose network problems on your IIS Web servers and IIS Web farms. I started with the absolute basic—PING. Ping.exe is a command-line tool you use to verify whether a local or remote TCP/IP system is available. This time, we look at another basic command-line utility you'll want in your arsenal—TraceRT.
Tracert.exe (short for Trace Route) is a route-tracing utility that can effectively measure the number of router hops (or routes) between remote systems, but it's even more effective for measuring hops between your internal systems when you have latency problems. In other words, TraceRT can help identify routing problems between application servers. You can use TraceRT with domain names, IP addresses, or even NetBIOS names.
First try a little test by using TraceRT to trace a route to your favorite Internet site. TraceRT lives in \WINNT\system32 and is consequently "pathed" so you can grab a Command Prompt and run it from anywhere on your system. (I could be overstating the obvious, but you need Internet connectivity to run this test.) When I type tracert interknowlogy.com from my home in Carlsbad, California, I get the following:

1    20 ms    20 ms    20 ms
 2   <10 ms    10 ms    10 ms  bb1-fe2-0.carlsbad1.ca.home.net []
 3    10 ms    20 ms    20 ms  c1-se6-0.sndgca1.home.net []
 4    10 ms    20 ms    20 ms  c1-pos1-0.anhmca1.home.net []
 5    10 ms    30 ms    10 ms  c1-pos1-0.lsanca1.home.net []
 6    10 ms    30 ms    10 ms  c1-pos2-0.snbbca1.home.net []
 7    20 ms    20 ms    20 ms  c2-pos3-0.snjsca1.home.net []
 8    20 ms    30 ms    30 ms
 9    20 ms    20 ms    30 ms  svl-core-03.inet.qwest.net []
10    20 ms    40 ms    30 ms  bur-core-02.inet.qwest.net []
11    30 ms    30 ms    30 ms  bur-core-01.inet.qwest.net []
12    30 ms    20 ms    30 ms  lax-core-01.inet.qwest.net []
13    30 ms    30 ms    60 ms  lax-edge-09.inet.qwest.net [] 

My company's Web site is hosted in our San Diego office—just 2.5 miles from my home. Look carefully at the route. The first seven router hops are on the @home network starting in Carlsbad, moving 30 miles south to San Diego, then turning 100 miles north to Anaheim, another 100 miles north to Los Angeles, 30 miles to Santa Barbara, and 350 miles to San Jose, where I get out to the Qwest network and start "hopping" my way south again all the way back to southern California. And remember, I live 2.5 miles away from my company's Web site location! And we wonder why our connections seem slow!
Let's move to a more practical example that you'll probably use when your users (or developers) complain that a site is slow: using TraceRT to measure router hops between your application servers. Just the other day, a content manager at one of my clients complained that the deployment application was really slow. I knew the LAN wasn't experiencing latency of any significance because I'd been using it. One of the network guys told me that they had updated the router software in the production DMZs over the weekend. From the staging server that hosts the content deployment application, I ran a TraceRT to the production Web server farm that deployed the content and found seven router hops! I printed the TraceRT results, took the printout to the networking folks who did the work, and said, "I think we have a problem." This case was particularly interesting because two router hops were to the same router. It hopped in and out and then back in to the DMZ. The networking folks fixed the problem, and the content manager was back in business.
TraceRT can also help you trace routes between Web servers and database, directory, and Lightweight Directory Access Protocol (LDAP) servers. Do yourself a favor and run some TraceRTs between those servers on your Web farm. Eliminating even one router hop between a Web server and a SQL Server machine could mean a spectacular increase in your site's performance.


In the past two columns (see the URLs at the end of this article), I talked about how command-line utilities can help you diagnose network problems on your IIS Web servers and IIS Web farms. Often, complaints about latency in Web applications are due to network, not software, problems. The more complex your environment, the easier it is to accidentally or inherently introduce network problems.
I started with the absolute basics—PING. Ping.exe is a command-line tool you use to verify whether a local or remote TCP/IP system is available. Then I covered another basic topic—TRACERT. TraceRT (short for "Trace Route") is a route-tracing utility that can effectively measure the number of router hops (or routes) between systems.
In this article, I cover PATHPING. PathPing.exe is a Windows 2000 route-tracing tool that combines Ping and TraceRT features with additional information those tools don't provide. You can use PathPing to identify routers that cause delays and other latency problems on a connection between two IP hosts. By default, PathPing pings each router 100 times, with a single ping every 0.25 seconds. Consequently, a default query requires 25 seconds per router hop.
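That timing works out to 100 pings at 0.25-second intervals, i.e. 25 seconds per hop; the 375-second computation shown in the sample run below corresponds to a 15-hop route:

```shell
# Default PathPing behaviour described above: 100 pings per router,
# one ping every 250 ms
pings=100
interval_ms=250
seconds_per_hop=$((pings * interval_ms / 1000))
hops=15                                   # a 15-hop route
total=$((hops * seconds_per_hop))
echo "$seconds_per_hop s per hop, $total s for $hops hops"
```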
First, try a little test using PathPing to trace a route to your favorite Internet site. PathPing lives in \WINNT\system32 and is "pathed" so you can grab a Command Prompt and run it from anywhere on your system. (I might be overstating the obvious, but you'll need Internet connectivity to run this test.) When I type: PathPing –n InterKnowlogy.com (the "n" parameter prevents resolving IP addresses to host names), I get the following:
Tracing route to interknowlogy.com [] over a maximum of 30 hops:
 3  . . .
Computing statistics for 375 seconds . . .
When I run PathPing, it first lists the route. This is the same route that the TraceRT command-line tool shows. Next, PathPing displays a busy message for approximately 25 seconds multiplied by the number of router hops. During this time, PathPing gathers information from all the previously listed routers and from the links between them. At the end of this period, it displays the test results like this:
Source to Here   This Node/Link
Hop  RTT    Lost/Sent = Pct  Lost/Sent = Pct  Address
                              0/ 100 =  0%     |
1    5ms     0/ 100 =  0%     0/ 100 =  0%
                              0/ 100 =  0%     |
2    9ms     0/ 100 =  0%     0/ 100 =  0%
                              0/ 100 =  0%     |
And so forth . . .
Trace complete.
In full PathPing output, the "This Node/Link Lost/Sent = Pct" column next to the "Address" column shows what percentage of packets each link dropped; a high value there (say, 20 percent) points straight at the problem link.
PathPing has several optional parameters:
  • -n — Host names. Doesn't resolve addresses to host names.
  • -h — Maximum hops. Maximum number of hops to search for the target.
  • -g — Router-list. Uses a loose source route along the host-list.
  • -p — Period. Number of milliseconds to wait between pings.
  • -q — Number of queries per hop.
  • -R — RSVP test. Checks to see whether each router in the path supports the Resource Reservation Protocol (RSVP), which lets the host computer reserve a certain amount of bandwidth for a data stream. Use the -R switch to test for Quality of Service (QoS) connectivity.
  • -T — Layer-2 tag. Attaches a layer-2 priority tag (for example, for IEEE 802.1p) to the packets and sends it to each network device in the path. This helps identify the network devices that don't have layer-2 priority properly configured. Use the -T switch to test for QoS connectivity.
  • -w — Timeout. Waits this many milliseconds for each reply.
As you can see, PathPing is a valuable and powerful tool for your network-problem diagnosis arsenal. It combines the power of Ping and TraceRT and provides even more detail about the routing of IP-based transmissions.

Set Up Remote Desktop Web Connection with Windows XP

Get Your Host Computer Ready

The Remote Desktop feature is only available in Windows XP Professional. It's not included with Windows XP Home Edition. For more information about how Remote Desktop Web Connection works, see About Remote Desktop Web Connection.
The first step in enabling Remote Desktop Web Connection is to install the necessary software on the host computer. Remote Desktop Web Connection is an optional World Wide Web Service component of Internet Information Services (IIS), which is included by default in Windows XP Professional. IIS responds to requests from a Web browser. Have your Windows XP Professional CD handy, and follow these steps:
1. Open Control Panel, click Add or Remove Programs, and then click Add/Remove Windows Components.
2. Click Internet Information Services, and then click Details.
3. In the Subcomponents of Internet Information Services list, click World Wide Web Service, and then click Details.
4. In the Subcomponents of World Wide Web Service list, select the Remote Desktop Web Connection check box, and then click OK.
5. In the Windows Components Wizard, click Next. Click Finish when the wizard has completed.
6. Click the Start button and click Run. Type Net Stop w3svc, and click OK. This temporarily stops the World Wide Web service to keep your system safe while you update it with security patches.
Enabling IIS without installing the appropriate security patches can make your system vulnerable to intruders. For more information, read Microsoft Security Bulletin MS01-018 and Security and Privacy for Home Users.
To check for updates:
1. Click Start, point to All Programs, click Microsoft Update, and then click Scan for updates. Follow the prompts to install all critical updates. If prompted, restart your computer.
2. Click Start, and then click Run. Type Net Start w3svc, and click OK. This starts the World Wide Web service.
I highly recommend using Automatic Updates, especially after installing Internet Information Services.

Configure Internet Information Services

By default, IIS is identified on your computer by the TCP port number 80. The steps in this section change the TCP port number and make it much more difficult for a potential attacker to communicate with your computer. The steps in this section are optional, but if you do follow them, you'll dramatically improve the security of your system. If you are already using your computer as a Web server, you should leave the TCP port number at the default setting of 80.
1. Open Control Panel, click Performance and Maintenance, and then click Administrative Tools. Double-click Internet Information Services.
2. In the IIS snap-in, expand your computer name, expand Web Sites, right-click Default Web Site, and then click Properties.
3. On the Web Site tab, change the value for TCP Port. Enter a number between 1000 and 65535 that you can remember easily, such as the month and day of a birthday or anniversary. You'll need to know the TCP Port when you connect to the computer in the future.
4. Click OK, and close the Internet Information Services snap-in.

Configure Remote Desktop

To connect using Remote Desktop, you must have a user account with a password. If you don't yet have a password on your account, create a password by opening Control Panel, and clicking User Accounts. Click your account, click Create a password, and follow the prompts. After you have a password, follow these steps to enable Remote Desktop:
1. Right-click My Computer, and click Properties.
2. On the Remote tab, click the Allow users to connect remotely to this computer check box, as shown in Figure 1.
Figure 1: Enabling remote desktop
3. Click Select Remote Users, and then click Add.
4. In the Select Users dialog box, type the name of the user and then click OK. Click OK again to return to the System Properties dialog box, and then click OK to close it.

Configure Your Router

If you use a router to connect to the Internet, you probably need to configure it to allow the Remote Desktop connection to your computer. For more information on routers and firewalls, see my Internet Firewalls column. You need to forward two ports to your Windows XP Professional-based computer: TCP port 3389, which Remote Desktop requires, and the port you specified in the TCP Port field in Internet Information Services (or TCP port 80 if you did not change the default). If you use Internet Connection Firewall (and you should!), see How to Manually Open Ports in Internet Connection Firewall in Windows XP for instructions on allowing traffic by TCP port.

Connect to Your Desktop

Computers are identified on the Internet using a unique IP address. To connect to your home computer from the Internet, you'll need to know your home IP address. Visit one of these sites from your home computer to learn your IP address: What Is My IP, What Is My IP.com, or Atlantic PC Solutions. Your IP address may change occasionally, so always check your IP address before you plan to connect. When you're ready to connect to your host computer, follow these steps:
1. Open Internet Explorer, and enter the URL http://ipaddress:port/tsweb/. For example, if your IP address is, and you chose the TCP Port 1374, you would enter the URL
2. If you're prompted to install the Remote Desktop ActiveX control, click Yes.
3. On the Remote Desktop Web Connection page, shown in Figure 2, click Connect. You don't need to fill in the Server field. If you leave the Size field set to Full-screen, the remote desktop will take over your local desktop.
Figure 2: Remote Desktop Web Connection page
Figure 2: Remote Desktop Web Connection page
4. Enter your user name and password at the Windows logon prompt, as shown in Figure 3, and then click OK. You'll see your desktop, complete with any windows that were left open the last time you used the computer.
Figure 3: The Remote Desktop Web Connection logon screen
Figure 3: The Remote Desktop Web Connection logon screen
When you're done, disconnect by closing the browser, or clicking the X at the top of the screen in full-screen mode. Be sure to close all browser windows. Your user name and password aren't stored, so you don't have to worry about someone else accessing your system.
If you're Internet-savvy and plan to connect to your home computer regularly, you can get a domain name to save yourself the trouble of writing down your IP address every time you plan to connect to your computer. You're already familiar with domain names; they're the ".com" names Web sites use to identify themselves. For example, the domain name for this Web site is Microsoft.com. If you have your own domain name, you can enter that into a browser to connect to your home computer, instead of the unfriendly IP address. For information on getting your own domain name and associating it with your home computer, visit the Dynamic DNS Providers List.
If you have Windows XP Professional and an always-on Internet connection, you can securely access your applications and data from work, an Internet café, or any place that has a compatible Web browser. Getting Remote Desktop Web Connection set up takes more than one click, but it's definitely easier than lugging your computer everywhere.

10 Ways to Troubleshoot DNS Resolution Issues

1. Check for network connectivity

Many times, if you open your web browser, go to a URL, and that URL fails to bring up a website, you might erroneously blame DNS. In reality, the issue is much more likely to be caused by your network connectivity. This is especially true if you are using wireless networking on a laptop. With wireless security protocols, the key will be periodically renegotiated or the signal strength will fade, causing a loss of network connectivity. Of course, you can lose network connectivity on any type of network.
In other words, before blaming DNS for your problems, start troubleshooting by checking “OSI Layer 1 – Physical” first and then check your network connectivity. Here you should find a wireless connection with a valid Internet connection.

Figure 1: Good Wireless Network Connection
Notice how the Access is Local and Internet. If it just said “Local” then you do not have a valid network address (you only have a self-assigned APIPA address, which starts with 169.254.x.x).
This brings me to my next point: make sure that you have a valid IP address on your network. Go to View Status on the screen above and then to Details, where you can check your IP address and verify your DNS Server IP addresses. Again, if you have a 169.254.x.x APIPA address you will never get to the Internet. Here is what it looks like:

Figure 2: Verifying your IP address and DNS Server IP addresses
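That APIPA check is easy to script. Below is a minimal sketch in Python; the helper name `is_apipa` is mine, but the 169.254.0.0/16 range is the standard APIPA block:

```python
# Sketch: detect a self-assigned APIPA address, which means DHCP failed
# and the host has no usable network path. Helper name is illustrative.
import ipaddress

APIPA_NET = ipaddress.ip_network("169.254.0.0/16")

def is_apipa(ip: str) -> bool:
    """Return True if the address is in the APIPA (self-assigned) range."""
    return ipaddress.ip_address(ip) in APIPA_NET

print(is_apipa("169.254.37.5"))   # True  -> no DHCP lease; fix connectivity first
print(is_apipa("192.168.1.20"))   # False -> a normal private LAN address
```

If this returns True for your current address, stop troubleshooting DNS and fix connectivity or DHCP first.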

2. Verify your DNS server IP addresses are correct and in order

Once you know that you have network connectivity and a valid IP address, let us move on to digging deeper into DNS by verifying that your DNS Server IP addresses are correct and are in the right order.
If you look at Figure 2 above, you can see the IPv4 DNS Server IP addresses. Notice that these are both on my local LAN / subnet so that I can access them even if my default gateway is down. This is how it works on most enterprise networks. However, your DNS servers do not always have to be on your subnet. In fact, with most ISPs, the DNS Server IPs would not even be on the same subnet as the default gateway.
In most home/SMB router configurations, they do not have their own DNS servers and the SMB router is proxying DNS to the real DNS Servers. In that case, your DNS Server IP address may be the same as your router.
Finally, make sure that your DNS Servers are in the right order. In my case, with the graphic in Figure 2, my local DNS Server is listed first. It is configured to forward any names that it cannot resolve to my local router. That router is proxying DNS to my ISP’s DNS Servers. I can look up those DNS Servers on my router, shown below in Figure 3.

Figure 3: My local DNS Servers, received from my ISP via DHCP
That brings me to two more points. First, make sure that your DNS Servers are in the right order. If you have a local DNS Server, like I do, and you are looking up a local DNS name, you want your PC client to look up that local DNS name on the local DNS Server FIRST, before the Internet DNS Server. Thus, your local DNS server needs to be listed first in your DNS settings, as these DNS Server IPs are used in the order they are listed.
Secondly, you should be able to ping the IP address of your ISP’s DNS Servers. So, just as my DNS servers are listed above on my router, I can verify that I can ping them even from my local PC:

Figure 4: Pinging my ISP’s DNS Server
Notice how the response time from the ping to my ISP’s DNS Server is horrible. This could cause slow DNS lookups or even failure if it takes too long for the DNS server to respond.
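As a rough cross-check on those ping times, you can also time a name lookup from a script. This is a sketch only; resolver caching means repeated calls will be much faster than the first, so treat the numbers as indicative:

```python
# Sketch: time how long the local resolver takes to turn a name into
# addresses. Slow or failing lookups here mirror slow DNS server responses.
import socket
import time

def dns_lookup_ms(hostname: str) -> float:
    """Resolve a hostname and return the elapsed time in milliseconds."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, None)
    return (time.perf_counter() - start) * 1000.0

# "localhost" resolves via the hosts file, so this works even offline.
elapsed = dns_lookup_ms("localhost")
print(f"localhost resolved in {elapsed:.2f} ms")
```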

3. Ping the IP address of the host you are trying to get to (if it is known)

A quick way to prove that it is a DNS issue and not a network issue is to ping the IP address of the host that you are trying to get to. If the connection to the DNS name fails but the connection to the IP address succeeds, then you know that your issue has to do with DNS.
I know that if your DNS Server is not functioning then it could be hard to figure out what the IP address is that you want to connect to. Thus, to carry out this test, you would have to have a network diagram or, like many network admins do, just have the IP address of a common host memorized.
If this works, until the DNS server is available again, you could manually put an entry in your hosts file to map the IP to the hostname.
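The same name-versus-IP logic can be sketched in a few lines: a resolution error on the name, when the raw IP still responds, points at DNS rather than the network. The helper name below is illustrative:

```python
# Sketch: check whether a name resolves at all. socket.gaierror is the
# resolution failure; if the name fails but pinging the IP works, it's DNS.
import socket

def name_resolves(hostname: str) -> bool:
    """Return True if the local resolver can turn the name into an IP."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:  # resolution failed -> suspect DNS
        return False

print(name_resolves("localhost"))  # True: served from the hosts file
```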

4. Find out what DNS server is being used with nslookup

You can use the nslookup command to find out a ton of information about your DNS resolution. One of the simple things to do is to use it to see which DNS server is providing you with an answer and which DNS server is NOT. Here is my nslookup of www.WindowsNetworking.com:

Figure 5: nslookup output
Notice, in Figure 5, how my local DNS server failed to respond but my ISP’s DNS server did provide me a “non-authoritative answer”, meaning that it does not host the domain but can provide a response.
You can also use nslookup to compare the responses from different DNS servers by manually telling it which DNS server to use.

5. Check your DNS suffix

If you are looking up a local host on a domain that your PC is a member of, you might be connecting to that host without using the FQDN (fully qualified domain name), counting on the DNS suffix to help out. For example, if I were to connect to “server1”, the DNS server could have multiple entries for that DNS name. You should have your network adaptor configured with the connection-specific DNS suffix, as shown on the first line of the graphic above, labeled Figure 1. Notice how in that graphic my DNS suffix is wiredbraincoffee.com. Whenever I enter just a DNS name like server1, the DNS suffix will be added on the end of it to make it server1.wiredbraincoffee.com.
You should verify that your DNS suffix is correct.
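The suffix behavior described above can be illustrated with a tiny sketch. The `qualify` helper is hypothetical; real Windows resolvers also consult suffix search lists, but the basic idea is just string concatenation:

```python
# Sketch: what a connection-specific DNS suffix does to a short host name.
# The suffix value is taken from the article's wiredbraincoffee.com example.
def qualify(name: str, suffix: str) -> str:
    """Append the DNS suffix to an unqualified (dot-free) host name."""
    if "." in name:  # already fully qualified; leave it alone
        return name
    return f"{name}.{suffix}"

print(qualify("server1", "wiredbraincoffee.com"))
print(qualify("www.example.com", "wiredbraincoffee.com"))  # unchanged
```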

6. Make sure that your DNS settings are configured to pull the DNS IP from the DHCP server

It is likely that you would want your network adaptor to obtain DNS Server IP addresses from the DHCP Server.  If you look at the graphic below, this adaptor has manually specified DNS Server IP addresses.

Figure 6: Verify DNS Server Settings
You may need to change to “Obtain DNS server address automatically” in order to get a new DNS server IP. To do this, open the Properties tab of your network adaptor and then click on Internet Protocol Version 4 (TCP/IPv4).

7. Release and renew your DHCP Server IP address (and DNS information)

Even if your adaptor is set to pull DNS information from DHCP, it is possible that you have an IP address conflict or old DNS server information. After choosing to obtain the IP and DNS info automatically, I like to release my IP address and renew it.
While you can do this with a Windows Diagnosis in your network configuration, I like to do it in the command prompt. If you have UAC enabled, make sure you run the Windows cmd prompt as administrator, then do:
IPCONFIG /RELEASE
IPCONFIG /RENEW
Then, do an IPCONFIG /ALL to see what your new IP and DNS Server info looks like.

8. Check the DNS Server and restart services or reboot if necessary

Of course, if the DNS server is really hung, or down, or incorrectly configured, you are not going to be able to fix that at the client side. You may be able to bypass the down server somehow, but not fix it.
Thus, it is very likely that you, or the admin responsible for the DNS server, need to check the DNS Server status and configuration to resolve your DNS issue.

9. Reboot your small office / home DNS router

As I mentioned above in #2 and showed in Figure 3, on home and small office routers, the DNS server settings are typically handed out via DHCP with the DNS server set to the IP of the router and the router will proxy the DNS to the ISP’s DNS server.
Just as it is possible that your local PC has bad network info (including DNS server IP addresses), it is also possible that your router has bad info. To ensure that your router has the latest DNS server information, you may want to do a DHCP release and renew on the router’s WAN interface with the ISP. Or, the easier option may be just to reboot the router to get the latest info.

10. Contact your ISP

We all know how painful it can be to contact an ISP and try to resolve a network issue. Still, if your PC is ultimately getting DNS resolution from your ISP’s DNS servers, you may need to contact the ISP, as a last resort.

6 Tips for Troubleshooting Active Directory

Tip 1: Determining DNS Health
The first thing we want to determine when assessing AD's overall health is DNS health. Failing DNS can cause problems such as client authentication failures, application failures, Exchange failures with e-mail or GAL lookups, LDAP query failures, replication failures ... you get the picture. DNS is critical. There's a very powerful option for DCDiag.exe: DCdiag /Test:DNS /e /v, which can be redirected to a file. The /e option indicates the test will be run on all DNS servers and /v is for verbose output. In a large environment, this may take a while to run, but it's worth the wait. I always read this starting at the bottom of the report, which is a table like that shown in Table 1. DCdiag runs six different tests: Authentication (Auth), Basic Connectivity (Basc), Forwarders (Forw), Delegation (Del), Dynamic registration enabled (Dyn) and Resource Record registration (RReg). The table also lists the External (Ext) test (connection to the Internet), but this command doesn't perform that test.
Domain: Corp.net
Domain: EMEA.Corp.net
Domain: Americas.Corp.net
Table 1. Enterprise DNS Infrastructure test results. [The per-server PASS/FAIL/N/A columns for the tests (Auth, Basc, Forw, Del, Dyn, RReg, Ext) did not survive formatting and are not reproduced here.]
In the sample output in Table 1, every DNS server -- which is usually also every domain controller (DC) -- in the forest is listed by domain. The cool thing about that is that it shows the domain configuration of the forest, which is very handy if you're a consultant or support engineer and not familiar with the environment. In reading the data in the table, the results are:
  • PASS: The DNS server passed this particular test
  • FAIL: The DNS server failed the test
  • N/A: The test was not run. This is usually because a previous test failed, so it makes no sense to test a dependent function, which would fail anyway.
In Table 1, we see the value of this test. In a glance, I can see where my trouble spots are. In a multiple-domain forest, you must run this command with Enterprise Admin credentials, or you will get FAIL results on all tests for all DNS servers in domains for which you don't have privileges. This is what happened for the EMEA domain in Table 1.

For further help, a complete, detailed list of the test results is available earlier in the report. For instance, I can go to the top of the report and search for Corp-DC02, and get details as shown in Figure 1.
There's a lot of good information I've cut for brevity, but you can reconstruct the DNS resolver configuration for each DNS server just from the data here. The point is that this section shows why the forwarders test had an N/A in the summary in Table 1. Using this method, we can pick our way through all the warnings, failures and N/A results in the summary table. And, of course, the beauty is that you have all DNS servers in the forest in one nice text file generated from one command. This can be run even from a client that has DCDiag on it.
Figure 1. Test results for domain controllers:
DC: Test-DC1.Wtec.adapps.com
   Domain: Wtec.adapps.com       
    TEST: Authentication (Auth)
     Authentication test: Successfully completed
    TEST: Basic (Basc)
     Microsoft(R) Windows(R) Server 2003,
     Enterprise Edition (Service Pack level: 2.0) is supported
    NETLOGON service is running
     IP address is static
     IP address:
     DNS servers: () [Valid] () [Valid]

Tip 2: Determining AD Replication Health
The Support tools for Windows 2003 Service Pack 1 (SP1) include a new Repadmin option called /replsum. Similar to the DCdiag /Test:DNS command in Tip No. 1, /replsum collects replication information for every DC in every domain in the forest. It will report the last time replication occurred between the DC the command was run on and each other DC in the forest. While there are a number of different options, I've only used these:
Repadmin /replsum /bysrc /bydest /sort:delta
  • /bysrc indicates to collect data for DCs that have replicated from the DC this command is run on
  • /bydest indicates to collect data for DCs that have replicated to the DC this command is run on
  • /sort:Delta means to show the results in descending order
A sample output is shown in Table 2. This shows six DCs in the domain and the delta since their last replication. Here we can easily see that the domain is healthy except for WTEC-DC2, which has not replicated for five days with an error 1722. I know this DC is down due to a planned move in the data center. In addition, if a DC has not replicated for its tombstone lifetime days, it will be flagged in this report so an administrator can immediately see the danger and take steps to remove it from the network.
Source DC       Largest Delta     Fails/Total   %%    Error
WTEC-DC2        05d.13h:39m:15s   5 / 5         100   (1722) The RPC server is...
WTEC-DC1        41m:26s           0 / 20        0
DDMCWIN2K8      39m:00s           0 / 4         0
MRNVMWTEC       08m:56s           0 / 4         0
WTEC-DC6        08m:34s           0 / 6         0

Destination DC  Largest Delta     Fails/Total   %%    Error
WTEC-DC1        05d.13h:39m:39s   5 / 25        20    (1722) The RPC server is...
DDMCWIN2K8      41m:50s           0 / 4         0
GSE-EXCH3       13m:35s           0 / 6         0
MRNVMWTEC       07m:24s           0 / 4         0
WTEC-DC6        06m:25s           0 / 6         0
Table 2.
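If you capture /replsum output regularly, the delta column is easy to screen programmatically. The parser below is a sketch that assumes delta strings like the ones in Table 2 ("05d.13h:39m:15s", "41m:26s"); repadmin output formatting can vary between versions, so verify against your own output:

```python
# Sketch: convert a repadmin /replsum delta string into seconds and flag
# DCs whose last replication is older than a threshold.
import re

_UNITS = {"d": 86400, "h": 3600, "m": 60, "s": 1}

def delta_seconds(delta: str) -> int:
    """Convert a delta like '05d.13h:39m:15s' into total seconds."""
    return sum(int(value) * _UNITS[unit]
               for value, unit in re.findall(r"(\d+)([dhms])", delta))

def is_stale(delta: str, max_days: int = 1) -> bool:
    """Flag a DC that has not replicated within max_days."""
    return delta_seconds(delta) > max_days * 86400

print(is_stale("05d.13h:39m:15s"))  # True  -> has not replicated in 5+ days
print(is_stale("41m:26s"))          # False -> replicated recently
```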

Tip 3: Replication Details for All DCs in the Forest
This technique -- very similar to the method used in Tip No. 2 -- will provide more detail. The command is Repadmin /showrepl * /csv >showrepl.csv. This puts the output in .CSV format, as shown in Table 3.
I like this command because it frequently turns up errors in more detail than the Repadmin /replsum command. Additionally, it will often report different errors -- or additional errors, error codes and so on -- and provide the naming context and specific data that /replsum doesn't provide.
Naming Context                   Source DC   Number of Failures   Last Failure Time   Last Success Time   Last Failure Status
DC=Wtec,DC=adapps,DC=hp,DC=com   WTEC-DC2    535                  2/18/2009 21:36     2/13/2009 7:50      1722
DC=Wtec,DC=adapps,DC=hp,DC=com   GSE-EXCH3   0                    0                   2/18/2009 21:37     0
DC=Wtec,DC=adapps,DC=hp,DC=com   WTEC-DC6    0                    0                   2/18/2009 21:37     0
DC=Wtec,DC=adapps,DC=hp,DC=com   MRNVMWTEC   0                    0                   2/18/2009 21:37     0
Table 3.
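Because /showrepl can emit CSV, failing partners are easy to filter with any CSV library. The sketch below uses illustrative header names modeled on Table 3; check them against your actual repadmin output, since header names differ slightly between versions:

```python
# Sketch: filter Repadmin /showrepl * /csv output for partners with
# replication failures. Header names here are illustrative.
import csv
import io

SAMPLE = """\
Naming Context,Source DC,Number of Failures,Last Failure Status
"DC=Wtec,DC=adapps,DC=hp,DC=com",WTEC-DC2,535,1722
"DC=Wtec,DC=adapps,DC=hp,DC=com",GSE-EXCH3,0,0
"""

def failing_partners(csv_text: str):
    """Return (source DC, failure count) for rows with any failures."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [(row["Source DC"], int(row["Number of Failures"]))
            for row in reader if int(row["Number of Failures"]) > 0]

print(failing_partners(SAMPLE))  # [('WTEC-DC2', 535)]
```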

Tip 4: NTDS Diagnostics
This tip is an absolute essential for getting additional data on Directory Service (DS) events. It's enabled per DC in the registry at HKEY_LOCAL_MACHINE\SYSTEM\
CurrentControlSet\Services\NTDS\Diagnostics. It's fairly straightforward. There are a variety of values that, when enabled, will dump additional events into the event log to assist with troubleshooting. The valid data for these values is an integer from zero to five, inclusive. The default value is zero, meaning minimal verbosity, and a setting of five will dump more than you want. Normally I set it at three and see if I need more. For instance, if I need more verbose details on replication, I'd set the "5 Replication Events" value to three and then reproduce the problem. Make sure to reset the value to zero when troubleshooting is concluded. These settings will fill up the event log quickly.
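As an illustration, bumping the replication diagnostic to level three, as in the example above, could be scripted with a .reg file like this sketch (verify the value name against your server's NTDS\Diagnostics key before importing, and remember to set it back to zero when you're done):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics]
"5 Replication Events"=dword:00000003
```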
The most common values I use include:
  • 1 Knowledge Consistency Checker
  • 10 Performance Counters
  • 13 Name Resolution (this is DNS related)
  • 15 Field Engineering
  • 18 Global Catalog
  • 2 Security Events
  • 5 Replication Events
  • 8 Directory Access
  • 9 Internal Processing
The 9 Internal Processing value is handy for getting additional details for DS events that indicate an internal error has occurred. This will often cause additional events that will aid in diagnosing the problem. It's common to set more than one of these values. For instance, in replication troubleshooting, it would be reasonable to enable 1 Knowledge Consistency Checker and 5 Replication Events.
The 15 Field Engineering value will dump several additional events to the DS log. Unlike the other diagnostics, this one needs to be set to five to provide relevant data. Specifically, it will produce events 1644 and 1643, which report inefficient LDAP queries including the client who was the source of the query, the query string and the root of the query. This is important because one of the headaches related to AD is the Local System Authority Subsystem Service (LSASS) process using up enough resources to hang or crash a DC and cause client log-on delays. Inefficient LDAP queries by a user or by an application -- or even a Linux client log-on -- will put a heavier load on LSASS. Enabling this diagnostic will quickly identify the guilty party by name or IP address. Some admins leave this diagnostic permanently enabled to monitor a busy environment, but again, it will fill up the event logs and possibly hide or overwrite other important events in the DS log.

Tip 5: Group Policy Management Console and HTML Reports

I'm sure nearly every AD admin alive uses this tool, but I thought it would be worth mentioning the value of HTML reports. There are two types of reports I use very frequently because I'm dealing with environments I'm not familiar with, and I usually want proof of the settings of a Group Policy Object (GPO) as well as the results from a particular client or clients.

Getting a report of a GPO is valuable even if you're the admin because it shows exactly what settings are defined -- in fact, it shows only the settings that are defined -- so you don't have to wade through the GP editor to find which ones are set. This is a quick way to see if the GPO is defined as you think it is. It also shows links, filters applied and other details. HTML reports for the Default Domain Policy are easy to read and can be expanded and closed by sections as needed, because they're in HTML format. To get this report, just right-click on any GPO in the domain tree and select "Save Report."
One of the problems with solving a GPO-related issue at a client is pestering the user, who may be hundreds of miles away, to log in and get a GPResult. If the user has logged in at least once on a workstation, Group Policy Management Console (GPMC) can provide you with an HTML-formatted GPResult that is produced when the user logs on. This is obtained in the GPMC console by right-clicking the "Group Policy Results" node and selecting the Group Policy Results Wizard. Of course, GPResult is a necessity in diagnosing client-side issues.

Tip 6: Active Directory Performance Diagnosis
While there are many other troubleshooting tips I could have elaborated on here, this is one that probably isn't well known. In troubleshooting server performance, there's a standard set of objects, including processor, LogicalDisk, Server, Memory, System and so on. However, there's an NTDS object that provides us with relevant AD counters such as DRA, Kerberos, LDAP and even NTLM-related counters. In addition, we can collect valuable AD data by monitoring the LSASS process. I recommend enabling the following:
  • Object: Process
    • Counters: %ProcessorTime, Working Set, Working Set Peak
  • Object: NTDS
    • Counters: (all counters)
Unfortunately, there's little information available on what acceptable thresholds are. The only one I've found that even addresses this is Microsoft's Branch Office Deployment guide. While many of the counters may or may not be familiar, I've only found a few that are significant:
  • DRA Pending Replication Synchronizations: These are the directory synchronizations that are queued, essentially a replication backlog. Microsoft only says these values should be "as low as possible" and that high values can mean "hardware is slowing replication." These could be indications that DC resources are at high utilization.
  • LDAP Client Sessions: This is the number of sessions opened by LDAP clients at the time the data is taken. This is helpful in determining LDAP client activity and if the DC is able to handle the load. Of course, spikes during normal periods of authentication -- such as first thing in the morning -- are not necessarily a problem, but long sustained periods of high values indicate an overworked DC.
  • LDAP Bind Time: This is the time in milliseconds needed to complete the last successful LDAP binding. Documentation says that this should be "as low as possible," but if you run the perfmon output through the Performance Analyzer of Logs (PAL) tool, it will flag 15 milliseconds as a warning threshold and 30 milliseconds as an error threshold. The fix is more resources: processor, memory and so on. (Note: PAL is an excellent performance-analysis tool, and is available online.)
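Those PAL thresholds are simple to apply to exported perfmon samples. Here is a hypothetical helper that labels an "LDAP Bind Time" sample using the 15 ms warning and 30 ms error thresholds quoted above:

```python
# Sketch: classify an 'LDAP Bind Time' perfmon sample (milliseconds)
# against the PAL tool's warning/error thresholds cited in the article.
def classify_bind_time(ms: float) -> str:
    """Map a bind-time sample to 'ok', 'warning', or 'error'."""
    if ms >= 30:
        return "error"    # PAL's error threshold
    if ms >= 15:
        return "warning"  # PAL's warning threshold
    return "ok"

print(classify_bind_time(8))   # ok
print(classify_bind_time(22))  # warning
print(classify_bind_time(45))  # error
```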
In diagnosing the LSASS process, as in any performance analysis, a baseline must be established. A note on Microsoft's DS blog indicates that if a baseline is not available, use 80 percent. That is, the LSASS counters shouldn't indicate more than 80 percent consumption. Above 80 percent consumption indicates an overload condition, which could be a high LDAP query demand (see Tip No. 4) or general lack of server resources. The resolution is to increase resources or reduce demand, but be advised this has the potential to cause a performance hit in the domain.
If you really want to solve your LSASS resource issues, put your DCs on x64 platforms with several processors and 32GB of RAM. You might be surprised at how much memory LSASS really can use.