---------------------------------------------------------------------------
Section 03
The Basic Web Server
Note - some of this material does pertain to other platforms, but I am
mainly referring to Unix-based Web servers.
---------------------------------------------------------------------------
03-1. What are the big "weak spots" on servers?
The big weak spots are as follows -
- Server running HTTPD as root. This means that any time a browser connects
  to the web server, the process handling the request is running as root --
  very powerful if there are any holes at all. If your browser can find a
  way in, you can gain access to anything on the system.
- Improper checking and buffering of user data by CGI scripts. Either a
  buffer can be overrun or arbitrary commands can be sent to the server
  (a quick sketch of this follows the list).
- Improper configuration of the host itself or of the web server software,
  allowing access to files not intended for the general public. This could
  include log files, the htpasswd file, and web server configuration files.
  But the main problem is a CGI interpreter (perl.exe on an NT web server
  leaps to mind) left where a remote browser can invoke it, allowing it to
  execute server commands, launch shells, rename or append files, etc.
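To illustrate the CGI point, here is a minimal sketch of the kind of
mistake that leads to command execution. The script name and form field
are hypothetical; the point is that user-supplied data reaches a shell
unchecked:

#!/bin/sh
# mail-file.cgi (hypothetical) - returns a document chosen by the visitor.
# The form sends a query string like "file=price-list.txt".
FILE=`echo "$QUERY_STRING" | sed 's/^file=//'`
echo "Content-type: text/plain"
echo ""
# BAD: the value is handed to a shell unchecked, so a request such as
#   GET /cgi-bin/mail-file.cgi?file=price-list.txt;id
# runs "cat price-list.txt;id" -- any command the httpd user can run.
/bin/sh -c "cat $FILE"

A safe version would compare $FILE against a short list of allowed file
names (or at least strip shell metacharacters) before doing anything with
it.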
---------------------------------------------------------------------------
03-2. What are the critical files?
They are as follows (the names may vary depending on the httpd server
you're running; a sample of typical contents follows the list):
httpd.conf Contains all of the info to configure the httpd service.
srm.conf Contains the info as to where scripts and documents reside.
access.conf Defines the service features for all browsers.
.htaccess Limits access on a directory-by-directory basis.
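As a rough idea of what the first two contain, an NCSA/Apache-style
srm.conf might include lines like these (the paths are only examples):

DocumentRoot /usr/local/etc/httpd/htdocs
ScriptAlias /cgi-bin/ /usr/local/etc/httpd/cgi-bin/

DocumentRoot is where the regular pages live, and ScriptAlias maps a URL
prefix onto the directory holding CGI scripts.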
---------------------------------------------------------------------------
03-3. What's the difference between httpd running as a daemon vs. running
under inetd?
Performance. If httpd is running as a standalone daemon, it reads its
configuration files once and responds faster to user requests. Typically
if a site is expecting many users, the server is dedicated. This can be
as simple as starting httpd as follows -
# httpd &
Of course the site will probably have something like this in the /etc/rc0
(or equivalent file) so that httpd starts on bootup -
if [ -x /path/to/httpd ]; then
    /path/to/httpd
fi
Most sites with web servers accessible to the Internet run it as a
standalone daemon. The downside is that if the web service isn't being
used all of the time, the server is wasting resources running a web
service no one is accessing.
Running httpd under inetd means it is started and stopped as user requests
come in. The performance isn't as good -- inetd spawns a new httpd for each
connection, so the configuration files are read in for every request. It is
usually set up by
having a line in /etc/services like this -
http 80/tcp
There is an entry like this in /etc/inetd.conf -
http stream tcp nowait nobody /path/to/httpd httpd
This type of setup is most common on intranets. Very few Internet servers
are set up this way, unless they are not very busy or the site is trying
to save resources by combining web, ftp, and a few other services on one
box.
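If you have shell access to a box and want to know which way httpd is being
run, a couple of quick, unprivileged checks will usually tell you (paths
may differ on your system):

grep httpd /etc/inetd.conf     # an uncommented line here means inetd
ps -ef | grep httpd            # (or "ps aux" on BSD-ish systems) -- a
                               # long-lived httpd process means standalone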
---------------------------------------------------------------------------
03-4. How does the server resolve paths?
Typically a server resolves paths via a directive in the configuration
files that says, in effect, "turn ~ into public_html", which means that
~thegnome will resolve to /server/path/to/documents + public_html. Therefore
if your server's path to docs is /usr/local/etc/httpd/htdocs, with a
subdirectory under that of public_html and all of the users' directories
under THAT, http://www.fastlane.net/pub/public_html/thegnome becomes
http://www.fastlane.net/~thegnome and accesses the same file.
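On NCSA and Apache the directive that does this mapping is UserDir,
normally found in srm.conf. A sketch of the two common forms (exact
behavior varies by server and version):

UserDir public_html
# appends public_html to the home directory listed in /etc/passwd

UserDir /usr/local/etc/httpd/htdocs/public_html
# on Apache, maps ~user to a user-named subdirectory under the given path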
The problem with this resolution is that some sites (depending on software,
revision, OS, patches, etc.) will resolve ~user based on the home directory
listed in /etc/passwd. This is good for intrusion, bad for security.
As stated earlier in the FAQ, accessing http://lame.target.com/~bin/etc/
can yield interesting results. In practical experience I've seen this
more often on BSD derivatives with Apache than anything else.
---------------------------------------------------------------------------
03-5. What log files are used by the server?
This entirely depends on the server software and how it is configured. It
is usually in a subdirectory called "logs" in a different section of the
tree than the regular web pages. It is usually named "access_log" for
Apache or NCSA, or "access" for Netscape, or some other easily
self-identifying name. This log will contain entries like so:
thegnome.fastlane.net - - [14/Dec/1996:00:13:31 -0600] "GET /nomad/ HTTP/1.0" 200 293
thegnome.fastlane.net - - [14/Dec/1996:00:13:35 -0600] "GET /nomad/2.html HTTP/1.0" 200 303
thegnome.fastlane.net - - [14/Dec/1996:00:13:39 -0600] "GET /nomad/3.html HTTP/1.0" 200 333
thegnome.fastlane.net - - [14/Dec/1996:00:13:43 -0600] "GET /nomad/4.html HTTP/1.0" 200 359
thegnome.fastlane.net - - [14/Dec/1996:00:13:47 -0600] "GET /nomad/5.html HTTP/1.0" 200 385
thegnome.fastlane.net - - [14/Dec/1996:00:13:51 -0600] "GET /nomad/6.html HTTP/1.0" 200 434
thegnome.fastlane.net - - [14/Dec/1996:00:13:55 -0600] "GET /nomad/nomad.html HTTP/1.0" 200 1988
thegnome.fastlane.net - - [14/Dec/1996:00:14:02 -0600] "GET /nomad/unix/index.html HTTP/1.0" 200 5066
thegnome.fastlane.net - - [14/Dec/1996:00:14:28 -0600] "GET /nomad/unix/cvnmount.exploit HTTP/1.0" 200 3117
Obviously if your phf accesses are in there, it could be incriminating. If
you gain access, you might want to eliminate yourself from them.
mv access_log access_tmp
cat access_tmp | grep -v thegnome.fastlane.net > access_log
rm access_tmp
The same goes for the error log. Called error_log or error, its entries
look like so:
[Thu Dec 19 22:10:02 1996] access to /usr/local/etc/httpd/htdocs/nomad/faqs/netware.htm failed for dyn2121a.dialin.rad.net.id, reason: File does not exist
[Thu Dec 19 22:10:21 1996] access to /usr/local/etc/httpd/htdocs/nomad/faqs/_free.html_ failed for dyn2121a.dialin.rad.net.id, reason: File does not exist
[Thu Dec 19 23:29:35 1996] access to /usr/local/etc/httpd/htdocs/nomad/HTTP failed for niobe.c2.net, reason: File does not exist
[Thu Dec 19 23:48:19 1996] send script output lost connection to client ip189.raleigh3.nc.interramp.com
[Thu Dec 19 23:48:25 1996] send script output lost connection to client 38.30.40.189
[Fri Dec 20 09:19:13 1996] accept: Connection reset by peer
[Fri Dec 20 09:19:13 1996] - socket error: accept failed
[Fri Dec 20 10:35:41 1996] accept: Connection reset by peer
[Fri Dec 20 10:35:41 1996] - socket error: accept failed
[Fri Dec 20 10:39:55 1996] access to /usr/local/etc/httpd/htdocs/nomad/unix/Xtx86.c failed for 168.126.131.123, reason: File does not exist
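If you are cleaning up after yourself, the error log needs the same
treatment as the access log. A minimal sketch, with the log directory and
hostname as placeholders:

# quick check for traces of yourself in all the logs
grep your.host.name /usr/local/etc/httpd/logs/*
# then scrub error_log the same way as access_log
mv error_log error_tmp
grep -v your.host.name error_tmp > error_log
rm error_tmp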
---------------------------------------------------------------------------
03-6. How do access restrictions work?
This is going to vary from platform to platform, but I'm going to use
NCSA as an example since it is so common. I'm not going into a lot of
detail; the point is to show that access can be limited, and to give a
flavor of how easy it is for an admin to set up.
Restricting Access by Host Name
In NCSA this is in access.conf, and you can specify the following:
allow - host names allowed
AllowOverride - determines whether per-directory access overrides
global access restrictions
deny - host names denied
There are more options depending on OS, server software, etc., and you can
get pretty detailed. But most server software allows access restriction by
host names.
Restricting Access by Directory
This is usually accomplished by specifying a <Directory> tag with the
restrictions following, and then closing with an ending tag of </Directory>,
all within the access.conf file. For example, let's say the admin wants to
limit a directory to company employees only on an NCSA server:
order deny,allow
deny from all
allow from mydomain.org
Include those lines in a .htaccess file in the directory you wish to limit
(or inside a <Directory> block in access.conf) and bingo, you're limiting
access.
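Put together, the access.conf version of the same restriction might look
roughly like this (the directory path is only an example, and the exact
syntax varies a little between NCSA and Apache versions):

<Directory /usr/local/etc/httpd/htdocs/private>
AllowOverride None
<Limit GET POST>
order deny,allow
deny from all
allow from mydomain.org
</Limit>
</Directory>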
---------------------------------------------------------------------------
03-7. How do password restrictions work?
This typically involves the admin performing the following functions:
- Building each user id/password as needed.
- Updating the main configuration files to recognize that
passwords are being used.
- Updating any .htaccess files in individual directories.
The command line syntax for creating a user ID and password (on NCSA) is:
htpasswd [-c] .htpasswd UserName
UserName is the user you wish to add to or edit in the password file (here
.htpasswd). The -c option specifies that a new file be created rather than
an existing one edited. htpasswd will prompt you for the password (and ask
you to type it in twice to confirm it). These passwords do not correspond to
system passwords, so if you are an idiot wannabe hacker and you just got
into a server with a shell, don't expect to create a root account with
htpasswd and then su to it.
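For example, creating a fresh password file and then adding a second user
might look like this (the user names are made up; the path matches the
AuthUserFile used in the example below):

htpasswd -c /usr/WWW/security/.htpasswd webuser
htpasswd /usr/WWW/security/.htpasswd webmaster

The first command creates the file and adds webuser; the second adds (or
changes the password of) webmaster in the existing file.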
In NCSA you will find the following in the access.conf file indicating
passwords are in use:
AllowOverride None
Options Indexes
AuthName secretPassword
AuthType Basic
AuthUserFile /usr/WWW/security/.htpasswd
AuthGroupFile /usr/WWW/security/NULL
require user UserName
For a directory-level usage, this might be in the .htaccess file:
AuthName secretPassword
AuthType Basic
AuthUserFile /usr/WWW/security/.htpasswd
AuthGroupFile /usr/WWW/security/.group1
require user UserName
Once again I'm not going to go into a lot of detail here. You need to read
the documentation for the server you're attacking (i.e. do your homework)
and THEN start changing or updating files. For example, .htaccess is the
name of the file for NCSA and its derivatives, and .wwwacl for CERN.
One of the good things for intruders is that if an admin is using
per-directory restrictions you can often retrieve these files just like a
regular URL. For example, if the target is restricting access to the
/usr/local/etc/httpd/docs/secure directory using a .htaccess file to
control access, this URL might retrieve it (depending on server software):
http://www.thegnome.com/secure/.htaccess
Besides containing important info, it will give you the location of the
web passwd file.
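You can grab it from the command line too, for example with lynx (any HTTP
client will do; lynx -source just dumps the raw file):

lynx -source http://www.thegnome.com/secure/.htaccess

If the AuthUserFile path it reveals happens to sit under the document root,
the same trick may fetch the password file itself.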
---------------------------------------------------------------------------
03-8. What is "Web Spoofing"?
Summed up, web spoofing is a man-in-the-middle attack that makes the user
think they have a secured session with one specific web server, when in
fact they have a secured session with an attacker's server. At that point,
the attacker could persuade the user to supply credit card or other
personal info, passwords, etc. You get the idea.
Here's how it works in a nutshell:
- The attacker has inserted an intercept in front of XYZ Company's web site,
  using DNS spoofing or some other means, such as getting the intercept
  listed in a search engine.
- The user wants to visit XYZ Company's web site and clicks on a link.
- The attacker has built their own SSL 'certificate' (see section 02-9
  for info on SSL), and the domain in this certificate looks authentic
  to the user's browser.
- The user sees the solid key, assumes all is safe, and trusts that the
  session is encrypted and secure.
- The attacker's forms on this trojan site may include fields for
passwords, credit cards, bank accounts, etc. and the unknowing user
provides this info to the attacker as they use the forms.
What is the problem here? It is not SSL. It is the certificates. You see,
as long as you have what looks to be the proper info in the certificate,
the user will never know the difference. Sure, the URL might not look
right, but you can use JavaScript to control what the browser displays.
Of course, only an idiot would redirect users to a server in their own home
or office; you would redirect them to a server you have compromised, and use
that compromised server's certificate to get the solid key. That's the
trick -- make the key solid, and the user is fooled.
For more details on this type of attack, check out the following URLs:
http://www.iol.ie/~fod/sslpaper/sslpaper.htm
http://www.cs.princeton.edu/sip
---------------------------------------------------------------------------