There have been several stories over the past week that have once again heated up the debate about whether publicly disclosing IT security holes is ethical behavior or not.

For instance, a hacking group called Goatse Security was able to gain access to 114,000 email addresses of iPad customers through a security weakness in AT&T's web site. AT&T fixed the problem before the security hole was publicly reported.

AT&T, however, was very unhappy about the breach, which caused it (and no doubt some of its customers) major embarrassment. AT&T sent an apology to its iPad customers, labeled the hacking "malicious," and called for the prosecution of the hackers. The FBI has become involved - strangely, in my opinion - and one of the Goatse Security members has reportedly been arrested.

The Goatse Security group has defended its actions in an interview at CNET, saying that what it did was in the public interest. The group says it did not disclose the AT&T web site security hole until after it was closed, that the captured data was self-censored before it was released, and that the idea for probing the security hole came about because a member of the group owned an iPad and noticed what might be a problem in how AT&T protected its data.
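For context, the weakness was reportedly an unauthenticated AT&T web endpoint that returned the email address tied to an iPad's SIM identifier (ICC-ID); because those identifiers run in largely sequential blocks, simply iterating over them harvested the list. Here is a minimal, purely hypothetical sketch of that class of flaw - all names and data below are invented for illustration, not AT&T's actual system:

```python
# Hypothetical sketch of an ID-enumeration flaw: an unauthenticated
# lookup keyed on a predictable, sequential device identifier.

# Simulated server-side table mapping device IDs to customer emails.
ACCOUNTS = {
    1000: "alice@example.com",
    1001: "bob@example.com",
    1002: "carol@example.com",
}

def lookup_email(device_id):
    # BUG: no authentication, and no proof the caller owns the device;
    # anyone who can guess a valid ID gets the matching email address.
    return ACCOUNTS.get(device_id)

def enumerate_emails(start, stop):
    # Because the IDs are sequential, an attacker can simply iterate.
    found = {}
    for device_id in range(start, stop):
        email = lookup_email(device_id)
        if email is not None:
            found[device_id] = email
    return found

print(enumerate_emails(1000, 1010))
```

The fix is the obvious one: require the caller to prove ownership of the device (an authenticated session) before returning anything, and rate-limit or reject bulk sequential lookups.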

Also this week came reports of problems with AT&T's iPhone preordering system. As Gizmodo reports, AT&T's web site had major problems handling the flood of orders, freezing and crashing for many customers. Other customers were forced to stand in long lines to have their orders taken manually at AT&T retailers - although some retailers reportedly gave up on even trying to do that.

In addition, a number of customers using AT&T's web site found that they were able to see other customers' data. As described in another Gizmodo report,

"A customer tries to log into their AT&T account to order a new iPhone 4 upgrade. Despite entering their username and password, the AT&T system would take them to another user account. This gives access to all kinds of private information about the mistaken customer: Addresses, phone calls, and bills, along with the rest of private information, becomes exposed ..."

AT&T told Gizmodo that it couldn't replicate the problem.
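AT&T has not said what caused the mix-up, but one classic way a web site can serve a logged-in user someone else's account page is a server-side response cache keyed only on the URL, ignoring the user's session. This toy sketch - entirely hypothetical, not AT&T's actual code - shows the bug and the obvious fix:

```python
# Hypothetical illustration of an account mix-up caused by a response
# cache that omits the authenticated user from its key.

cache = {}

def render_account_page(user):
    # The private page that should only ever be shown to `user`.
    return f"Account details for {user}"

def handle_request(url, user):
    # BUG: the cache key is the URL alone, so the first user's private
    # page is replayed to every later visitor of the same URL.
    if url not in cache:
        cache[url] = render_account_page(user)
    return cache[url]

def handle_request_fixed(url, user):
    # FIX: include the authenticated user in the cache key.
    key = (url, user)
    if key not in cache:
        cache[key] = render_account_page(user)
    return cache[key]

# Alice loads her account page first; the response is cached.
print(handle_request("/account", "alice"))  # Account details for alice
# Bob requests the same URL and is served Alice's private data.
print(handle_request("/account", "bob"))    # Account details for alice
```

In real deployments the same failure shows up as a misconfigured CDN or proxy caching authenticated pages, which is why private responses are normally marked uncacheable or varied per session.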

Meanwhile, an AT&T "insider" supposedly told Gizmodo that the web site problem might be related to "a major fraud update that went wrong."

However, there is also a report at PCWorld that the preorder web site chaos was in fact caused by AT&T's failure to adequately test the preorder system itself.

Then there was a third story, appearing in today's Computerworld, which says hackers are now exploiting a security flaw in Windows XP that a Google engineer disclosed publicly last week. The Google engineer says he released details about the flaw because Microsoft - which he had told about the problem - would not commit to fixing it for at least 60 days, if not more.

Microsoft has released a partial workaround for the problem, but has not yet said when a full fix will be released.

So, in review, we have a few different situations here. First, hackers find a security flaw but don't tell the company directly about it (a reporter basically does); however, they do publicize the flaw after it has been fixed. Second, a company's poorly designed web site discloses personal data all by itself, which is reported by users of the site. And third, a person finds a security flaw and reports it to the company, which basically says it will get around to fixing it eventually. The person then decides to try to force the company to fix the flaw by going public with it.

Okay, it's all a bit complicated, but did anyone act unethically here?

Before answering that, let's turn to the engineering codes of ethics for some guidance.

Principle 1.04 of the ACM/IEEE-CS Software Engineering Code of Ethics states that a software engineer will "Disclose to appropriate persons or authorities any actual or potential danger to the user, the public, or the environment, that they reasonably believe to be associated with software or related documents." By that standard, it would seem that a hacker/researcher/engineer has a responsibility to disclose to a company that it has a security flaw in its IT system.

However, the ACM/IEEE-CS Code is a bit more ambiguous about the responsibility of the person discovering the flaw when, after being told about it, the company does nothing to fix it.

If we go to Article 1 of the IEEE's Code of Ethics, which says that an IEEE member agrees "to accept responsibility in making decisions consistent with the safety, health and welfare of the public, and to disclose promptly factors that might endanger the public or the environment," then it would seem that at some (unspecified) point in time a person discovering a security flaw would be doing the right thing by going public with the information.

So what is a reasonable amount of time that someone like the Google engineer above, who finds a security flaw and reports it, should give a company before going public with it?

Did the Goatse Security group act unethically by disclosing the hole after it had been fixed? Should they have reported the hole first? If they had, and AT&T fixed it, should they have still disclosed what they found?

Did the AT&T customers who saw other customers' data and reported it to Gizmodo act unethically in any way? In other words, did they have a duty to report their discovery only to AT&T?

In addition, a company that has a poorly designed web site that is open to hacking, or that doesn't agree to fix a security flaw in a "reasonable" amount of time, would seem to be "ethically challenged" as well.

If the iPhone preorder system was indeed poorly tested, and this was known, did AT&T act unethically in going ahead with it anyway?

Pick your poison, and let me know what you think about the ethics of it all.
