Locking the E-Safe

A variety of cryptographic techniques are being used to minimize threats to electronic transactions


One of the first devices that improved the daily routine of commerce was the cash register, which enforced business rules about when cash drawers could be opened and allowed the shop owner to know how much money should be lying in the drawer at the day's end. This relatively simple device also improved the accuracy and the speed of the check-out procedure while reducing the risk of theft by employees.

Today, merchants, banks, and consumers face much larger risks. The explosion of the Internet has permitted even small merchants to sell goods and services to a worldwide market, yet it has also exposed them to the depredations of a large pool of attackers whose motives range from greed to boredom. If the attacks come from other countries, it may not be practical to seek legal recourse. Moreover, as the value of on-line information increases, so does the temptation to engage in insider theft: system administrators, for example, may discover that they can transfer US $10 million to offshore banks and can even charge their employers for airplane tickets to other countries.

Fear of these risks has created a demand for security features built directly into electronic commerce systems. The good news is that existing security mechanisms can be combined to minimize a wide range of threats to electronic commerce.

Security isn't the only problem. European banks will soon have electronic stored-value cards that are as good as cash. A vending machine in the middle of a golf course will be able to accept payment from these cards, without any need for a network connection. Forgetting the password for a stored-value card could be as troublesome as losing a wallet.

Four-square security

The mechanisms used to solve security problems can be divided into four areas--privacy, authentication, integrity, and scalability--though a single mechanism can often mitigate more than one kind of problem. Privacy includes the desire to keep documents and communications secret, as well as to hide the very existence of certain kinds of information and to protect the identities of the parties communicating. Authentication and integrity refer to the need to confirm the identity of users, the authenticity of messages, and the integrity of messages or connections. Scalability mechanisms, like key-distribution centers and digital certificates, are crucial to the success of electronic commerce systems, because they help in creating systems that involve millions of users, transactions, and documents.


[1] In a conventional encryption system, Alice encrypts a document with a Data Encryption Standard (DES) key, and Bob decrypts it with the same key. Sharing secret keys in such systems requires a prior relationship between the parties.

The cornerstone of all privacy mechanisms is encryption. An encryption algorithm transforms a plaintext message into an unreadable ciphertext using a key [Fig. 1]. The correct key can reverse the process, permitting anyone who knows it to get the plaintext message. A strong encryption algorithm will resist even serious attempts to read the message by means other than application of the correct key. The benefit of encryption is that the ciphertext does not have to be kept secret; it could be broadcast over a satellite or published in a newspaper since only someone with the correct key can read the message. If the encryption key and decryption key are identical, the system is termed symmetrical.
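The idea can be sketched in a few lines of Python. This is not the article's own implementation; the cryptography package's Fernet construction (a modern AES-based symmetric cipher) simply stands in for a generic symmetric algorithm such as DES, and the message text is invented for illustration.

```python
# Minimal symmetric-encryption sketch: the same key both encrypts and decrypts.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the single secret Alice and Bob must share
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"Wire $10,000 to account 42")
print(ciphertext)                    # unreadable without the key
print(cipher.decrypt(ciphertext))    # anyone holding the key recovers the plaintext
```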

Encryption has transformed the problem of keeping lots of messages secret into the problem of keeping a single key secret. A key is relatively small (40 to 2048 bits long) and can usually be used for long periods of time, so it is not extremely hard for systems of moderate size to distribute keys securely.


[2] In public-key cryptography, anything encrypted with a public key can be decrypted only with the corresponding private key, and vice versa.

In the mid-1970s, a new class of encryption algorithms--asymmetric, or public-key, cryptography--was invented [Fig. 2]. Before then, if two people wanted to communicate securely, they had to agree on a secret key in advance. This was cumbersome when a large group of people needed to communicate, since the number of secret keys grows with the square of the community's size. The important feature of public-key algorithms is that the key used to encrypt a message differs from the one used to decrypt it; in fact, even if an attacker knows one of the keys, it is computationally infeasible to deduce the other. For example, Alice could publish her encryption key, so that when Bob wanted to send her a message, he would encrypt it with her public encryption key. Only Alice would be able to decrypt this message, since her decryption key is secret. This approach works for large communities because each person has to publish only a single key; thereafter they can all receive private messages from the others.
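A minimal sketch of that Alice-and-Bob exchange, assuming the Python cryptography package and using RSA with OAEP padding as the public-key algorithm (the article does not name a specific cipher):

```python
# Public-key sketch: Bob encrypts with Alice's public key; only Alice's
# private key can decrypt the result.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
alice_public = alice_private.public_key()   # this half can be published

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = alice_public.encrypt(b"Meet at noon", oaep)   # anyone can do this
plaintext = alice_private.decrypt(ciphertext, oaep)        # only Alice can do this
assert plaintext == b"Meet at noon"
```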


[3] Alice encrypts the document with a random DES key [1] and then looks up Bob's public key and uses it to encrypt the DES key [2]. Together, the encrypted document and the key form the digital envelope [3]. Only Bob's private key can open an envelope addressed to him.

Public-key algorithms are slower than symmetric algorithms, since the former are usually based on arithmetic with numbers at least 300 decimal digits long. This limitation was overcome by a security mechanism called a digital envelope [Fig. 3], an encrypted binary message that has a standard format for specifying recipients, cryptographic keys, and data encoding. The bulk of the envelope is occupied by the message body, encrypted with a fast symmetric-encryption algorithm using a newly generated message key, which increases the security of the overall system. This message key is then encrypted with the recipient's public encryption key, and the two parts are sent off as a single digital envelope. The recipient uses a private decryption key to extract the message key, which in turn decrypts the body of the message. Digital envelopes work well for store-and-forward messaging environments or even file encryption.
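The digital-envelope construction can be sketched by combining the two previous pieces. Fernet again stands in for the fast symmetric cipher and RSA-OAEP for the recipient's public key; the document text is invented for illustration.

```python
# Digital-envelope sketch: a fresh symmetric message key encrypts the body,
# and the recipient's public key encrypts that message key.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public = bob_private.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Seal: encrypt the document with a one-time key, then encrypt that key for Bob.
message_key = Fernet.generate_key()
encrypted_body = Fernet(message_key).encrypt(b"Design specs for the new engine")
encrypted_key = bob_public.encrypt(message_key, oaep)
envelope = (encrypted_key, encrypted_body)

# Open: only Bob's private key recovers the message key, which unlocks the body.
recovered_key = bob_private.decrypt(envelope[0], oaep)
document = Fernet(recovered_key).decrypt(envelope[1])
```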

In commercial applications, the thing being protected is often an interactive Internet communication channel between a consumer and a merchant. Such interactive sessions can be protected by digital envelopes or by a session-key mechanism like the Diffie-Hellman (DH) Key Agreement protocol. The DH protocol is an exchange of messages that allows two parties to agree on a shared secret key, which can then be used to protect the privacy and integrity of all subsequent traffic. A typical shared secret key produced by the DH protocol has a length of 128 bytes, and hash functions are used to cut that down to, say, an 8-byte DES cipher key.

This DH protocol might be used to set up encryption keys for a Web shopping session. The buyer picks a secret random value, BuyerSecret, and sends the seller a DH function of BuyerSecret. The seller, too, picks a random value, SellerSecret, and sends a DH function of it to the buyer. Now that the buyer and seller each know their own secret and a DH function of the other party's secret, they can compute another DH function of these values. Both parties will end up computing the same value, although they are starting with different inputs.

The security of DH is based on the fact that it is easy to raise numbers to a power (for instance, 2 raised to the third power is 8), but hard to invert that operation (for example, to recover the exponent 3 given only the base 2 and the result 8). Computing such logarithms is not excessively hard for ordinary integers, but becomes very hard indeed when the numbers are large and clock (modulo) arithmetic is used with primes. For example, 2 raised to the 3rd modulo 5 is 3, not 8.
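The exchange described above can be illustrated with Python's built-in modular exponentiation. The prime and generator below are toy values chosen only so the numbers stay readable; a real deployment would use a prime hundreds of digits long.

```python
# Toy Diffie-Hellman exchange. The small prime and generator are for
# readability only; real systems use far larger parameters.
import hashlib
import secrets

p = 4294967291          # a small prime modulus (illustration only)
g = 5                   # public generator

buyer_secret = secrets.randbelow(p - 2) + 1     # BuyerSecret
seller_secret = secrets.randbelow(p - 2) + 1    # SellerSecret

buyer_public = pow(g, buyer_secret, p)          # DH function of BuyerSecret
seller_public = pow(g, seller_secret, p)        # DH function of SellerSecret

# Each side combines its own secret with the other party's public value
# and arrives at the same shared number.
buyer_shared = pow(seller_public, buyer_secret, p)
seller_shared = pow(buyer_public, seller_secret, p)
assert buyer_shared == seller_shared

# Hash the shared value down to a short (here 8-byte) cipher key,
# as the article describes.
session_key = hashlib.sha256(str(buyer_shared).encode()).digest()[:8]
```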

A shared secret can be set up with a total stranger; even if an attacker records the setup messages, the secret cannot be deduced. After a DH setup, the user has no idea whom he or she is talking to, but does know that no one else is listening. This creates an important security feature: perfect forward secrecy, which means that even if the attacker were to find out everyone's private keys (perhaps as a result of a court order), it would be impossible to decrypt any of the messages transmitted under the DH key. That key depends on fresh random values that each party picks, and once those values are discarded, a recording of the messages cannot be decrypted. The encryption key for a DH-protected session does not depend on any long-term secret key the attacker could discover.

If a DH system requires authentication, digital certificates can be exchanged after the key has been set up, and a challenge-response mechanism can confirm that each party has current access to the matching private key. Without resorting to traffic-tracing mechanisms, an attacker would not be able to decrypt the authentication exchange and would therefore not be able to tell who was talking to whom. This feature is important when a business does not want its competitors to find out with whom it is doing business.

Authentication

The most basic form of authentication is validating the identity of system users. Traditionally, passwords have been used to do so, but they become a weak link when an attacker can monitor connections between users and the target system. Many news stories show how easy it is for attackers to grab passwords from a network.

One mechanism that offers substantially better user authentication is a credit-card-sized authentication device--a token or smartcard--that can store a secret key and perform a cryptographic challenge-response. To access the system, the user must have the authentication token as well as a password.

In their strongest form, authentication devices require the user to activate the token by entering a password on its keypad. When the user logs in over the Internet, the company's firewall sends back a random numeric challenge that must also be entered on the token's keypad. The token encrypts the challenge under its secret key and displays the result to the user, who sends it to the firewall. Since the firewall also knows the token's secret key, it can check the response to ensure that the user has the correct token. In the end, the firewall has high confidence that it is communicating with an authentic user. Additional steps can be performed to authenticate the firewall to the user and thus ensure that the user's network connection has not been incorrectly routed to an attacker's site.
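A software sketch of that round trip, assuming a shared token secret and using an HMAC digest in place of the token's encryption step; the function name and values are invented for illustration.

```python
# Challenge-response sketch: the token and the firewall share a secret key.
# An HMAC digest stands in for the token's "encrypt the challenge" step.
import hmac
import hashlib
import secrets

token_secret = secrets.token_bytes(16)        # stored in the token and at the firewall

def token_response(secret, challenge):
    # what the token computes and displays to the user
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()[:8]

challenge = secrets.token_hex(4).encode()     # random value sent by the firewall
user_answer = token_response(token_secret, challenge)

# The firewall repeats the computation and compares.
assert hmac.compare_digest(user_answer, token_response(token_secret, challenge))
```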

A substantial simplification of the full challenge-response mechanism involves basing the challenge on clocks kept synchronized between the user and the system, eliminating the need for the user to type in the challenge. Instead, the device simply displays new values every minute. When users connect to a Web page that requires extra authentication, they type in the number from the card instead of a password. The number can be recorded, but unlike a password, it cannot be reused: the authentication server will allow a particular number to be used only once during a given minute, and after that, it is no longer valid. Time-based authentication devices are very popular in the marketplace, which points up the importance of creating security systems that provide good security without undue burden. Hard-to-use systems are just not used.
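A time-based variant might look like the following sketch, where the current minute replaces the explicit challenge; the six-digit truncation and the shared secret are illustrative choices, not a description of any particular commercial token.

```python
# Time-synchronized code sketch: both sides hash the shared secret together
# with the current minute, so the displayed value changes every 60 seconds.
import hmac
import hashlib
import time

def minute_code(secret, now=None):
    minute = int((now if now is not None else time.time()) // 60)
    return hmac.new(secret, str(minute).encode(), hashlib.sha256).hexdigest()[:6]

secret = b"shared-token-secret"     # illustrative value only
displayed = minute_code(secret)     # what the card shows the user
assert hmac.compare_digest(displayed, minute_code(secret))   # server-side check
```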

The challenge-response technique can be undertaken entirely in computer software. The secret key shared by the user and the system can be stored on the user's computer with some kind of password-based encryption, and the challenge-response actions can be performed without any user intervention. The drawback--somewhat diminished by the popularity of laptops--is that users must have their computers with them to access the system. Moreover, software secrets can be surreptitiously copied by local attackers, who could then, in the safety of their own homes, spend hours trying to find the password by having a program try every word in the dictionary, along with such common variations as substituting the digit 1 for the letter I. Authentication tokens cannot be copied as easily as disk files, so they resist this threat.

Integrity

Once users have been authenticated, the next problem is authenticating the individual messages. An electronic funds-transfer system has to know that its instructions come from the expected source and have not been modified by an attacker. The core mechanism for achieving this kind of authentication is called a one-way digest.

The functions are called "digests" because they take as input an arbitrarily long message and produce a summary--or digest--of it that is fixed in size. A familiar example of a digest is a simple checksum or parity check. Cryptographic digests are one-way in the sense that it is easy to compute them, but computationally infeasible to find a message that has a given digest; that is, computing the function is easier than computing its inverse. Ordinary checksums do not necessarily have this property--for example, it is easy to modify the digits of a bank account number without changing the check digit. The one-way property means that the digest will detect message tampering with very high confidence: an attacker cannot modify a message without changing the digest.
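The property is easy to demonstrate with a standard digest function such as SHA-256, here standing in for the digests of the article's era; the messages are invented.

```python
# One-way digest sketch: changing even one character of the message
# changes the digest, so tampering is detected.
import hashlib

message = b"Transfer 5000 shares to account 1234"
digest = hashlib.sha256(message).hexdigest()

tampered = b"Transfer 9000 shares to account 1234"
assert hashlib.sha256(tampered).hexdigest() != digest
```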


[4] To sign a document with a digital signature, Alice passes her document through a hashing algorithm so as to produce the message digest, then encrypts it with her private key, forming a digital signature. Next, she transmits the signed document to Bob. After receiving her transmission, Bob employs the same hashing algorithm to create another message digest and also decrypts the signature using Alice's public key. If the two digests match, then the signature is valid.

One-way digests and public keys can be combined to create digital signatures attached to electronic mail, purchase orders, and other business documents [Fig. 4]. Alice can sign a message by computing the digest and then encrypting it with her private key. Bob verifies that the message came from her by decrypting the digest using Alice's public key and comparing that value with the digest he computed for the message. If an attacker modifies the message, the digests will not match. If an attacker tries to modify both the message and its encrypted digest, that, too, will fail, because the attacker does not know Alice's private key. Notice that in this case the message is not encrypted; the attacker can read it. But anyone who knows Alice's public key can detect tampering, so the attacker cannot alter the message undetected.
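A sketch of the sign-and-verify flow, again assuming the Python cryptography package; RSA-PSS over SHA-256 stands in for whatever signature algorithm Alice's software actually uses, and the order text is invented.

```python
# Digital-signature sketch: Alice signs with her private key; anyone with her
# public key can verify the signature and detect tampering.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
alice_public = alice_private.public_key()
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

order = b"Purchase order #1774: 200 widgets"
signature = alice_private.sign(order, pss, hashes.SHA256())

try:
    alice_public.verify(signature, order, pss, hashes.SHA256())         # passes
    alice_public.verify(signature, order + b"!", pss, hashes.SHA256())  # raises
except InvalidSignature:
    print("Tampering detected")
```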

Privacy and authentication mechanisms can be combined to create a signed digital envelope that ensures both privacy and integrity. In this case, the message has three parts: the encrypted plaintext body, the plaintext encrypting key encrypted under the recipient's public key, and the plaintext digest encrypted with the sender's private key. The first two are used as before to allow the proper recipient to read the plaintext; the digest in the third part is used to check for tampering. The result is a signed and sealed digital envelope whose contents could be a short e-mail note or the detailed design specifications of an automobile engine.

One-way digests can also be combined with shared secrets to create a high-performance integrity-checking mechanism called a message authentication code (MAC). Integrity checking is intended to make sure that if an attacker tries to tamper with messages, or even with a single character of a single message, the mechanism will detect the modification and reject the message. For example, the system may require an integrity check on each character transmitted over a network connection. If so, the performance overhead of a public-key operation may be unacceptable.

An alternative is to set up a shared integrity-checking key that becomes one of the inputs to the one-way digest. Like symmetric encryption keys, this integrity key must be kept secret, since attackers who know it can tamper with messages. The integrity of even a single byte can be ensured by sending, along with the byte, a digest computed over the integrity key, a byte counter, and the byte itself. Recipients know the key, the byte counter, and the received byte, so they can check the digest to detect modified or missing bytes. Attackers do not know the integrity key and thus cannot compute a digest that will pass inspection. This mechanism could, for instance, guarantee the integrity of a real-time connection between a stockbroker and the trading floor of an exchange, ensuring that an order to sell 5000 shares cannot be changed to an order to sell 9000 shares.
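Python's standard hmac module provides a keyed digest of this sort; the key, counter, and stock-order payloads below are invented for illustration.

```python
# Keyed-digest (MAC) sketch: the shared integrity key is mixed into the
# digest, so an attacker who lacks the key cannot forge a digest that passes.
import hmac
import hashlib

integrity_key = b"shared-integrity-key"        # illustrative value only

def mac(counter, payload):
    # digest over the counter and the data, keyed with the integrity key
    return hmac.new(integrity_key, str(counter).encode() + payload,
                    hashlib.sha256).digest()

counter, payload = 42, b"SELL 5000 XYZ"
tag = mac(counter, payload)

assert hmac.compare_digest(tag, mac(counter, payload))                # accepted
assert not hmac.compare_digest(tag, mac(counter, b"SELL 9000 XYZ"))   # rejected
```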

Scalability

The basic cryptographic primitives needed for electronic commerce were created by researchers in the 1970s and '80s and have by now moved into mainstream communications engineering. Newer primitives and constructs deal with problems of scale--for instance, how to provide digital certificates for everyone who has a credit card--and are being forced into mainstream engineering by the exponential growth of the Internet. Some of the newer problems do not have solutions, and some apparent solutions have turned out to be wrong. Still, certain mechanisms do appear to have enduring value.

Dealing with millions of users is the first scaling problem. A traditional symmetric-key system requires every pair of users to have a unique key: a few thousand users would require millions of keys, and millions of users, an unmanageable number of them. This problem can be handled by using either asymmetric public-key cryptography or key distribution centers (KDCs) that facilitate high-performance symmetric cryptography. A KDC is a system that users trust with their secrets. In particular, each user shares a secret key with the KDC and uses that key to obtain the keys employed to communicate with other users.

For example, if Alice wants to send a message to Bob, she first sends the KDC a message requesting a key for talking with him. That request and the response from the KDC are protected by the secret key (actually an integrity-checking key and a message-encrypting key) that Alice shares with the KDC. The response from the KDC includes a packet of bits that Alice will pass on to Bob with the message. The packet contains a copy of the message key encrypted under the secret key Bob shares with the KDC. When Bob gets the message, he decrypts the packet of bits to find the message key. To reduce the overhead of communicating with the KDC, Alice and Bob can decide to reuse the message key for a sequence of messages or for a predetermined time. For example, a connection between a stockbroker and a trading floor could be set up in the morning and the same keys used for all buying and selling messages sent during that day.
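The key-wrapping step can be sketched as follows, using Fernet keys as stand-ins for the long-term secrets Alice and Bob each share with the KDC; the article does not specify a cipher, and the note text is invented.

```python
# KDC sketch: Alice and Bob each share a long-term key with the KDC. The KDC
# invents a message key and wraps it twice, once for each of them.
from cryptography.fernet import Fernet

alice_kdc_key = Fernet.generate_key()   # shared between Alice and the KDC
bob_kdc_key = Fernet.generate_key()     # shared between Bob and the KDC

# --- at the KDC, answering Alice's request to talk to Bob ---
message_key = Fernet.generate_key()
for_alice = Fernet(alice_kdc_key).encrypt(message_key)
packet_for_bob = Fernet(bob_kdc_key).encrypt(message_key)

# --- Alice unwraps her copy and uses it; Bob unwraps the packet she forwards ---
alice_key = Fernet(alice_kdc_key).decrypt(for_alice)
note = Fernet(alice_key).encrypt(b"Buy 100 shares at market open")
bob_key = Fernet(bob_kdc_key).decrypt(packet_for_bob)
print(Fernet(bob_key).decrypt(note))
```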

Many of the benefits of KDCs are also drawbacks. The KDC generates all message and session keys, so they are all uniformly good and uniformly protected. There is no need to worry about a laptop that generates very predictable message keys. Of course, if the KDC is physically or logically compromised, so is the whole system. The KDC can provide key escrow (allowing authorized entities to invade the privacy of the users) by recording all message keys or by producing them in a reproducible way. Some environments require key escrow.

A variation on the KDC system is to hand out access-control tickets instead of session keys. The packet of bits Alice gets from the KDC could be a message encrypted for a funds-transfer server, stating that she may transfer up to $1 million during a particular eight-hour period. Alice would include these bits whenever she needed to use the funds-transfer system, which could decrypt the packet and establish that Alice was complying with the business rules.

In small-scale systems, keys are rarely compromised and are never changed. In global electronic commerce systems, however, these are common events. Mechanisms for handling key changes are similar for symmetric and asymmetric cryptography. The main idea is to attach names and attributes to keys: a name might be a short printable string, while attributes might include the range of dates for which the key is valid. Key names are then added to all messages, and recipients maintain a key chain that can be indexed by the key name to find the correct key.

To reduce the size of key chains, keys can be divided into two classes: key-encrypting keys (KEKs) and data-encrypting keys (DEKs). Only the KEKs are given names and thus put on key chains. To create an encrypted message, the sender creates a fresh DEK (which does not have a name), encrypts the data with it, and then encrypts the DEK under a named KEK. The final message includes the KEK's name, the encrypted DEK, and the encrypted data. This refinement dramatically reduces the total number of bytes encrypted with named keys, reducing the risk that an attacker will gather enough information to break the cipher.
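A sketch of that naming scheme, with Fernet standing in for both key classes; the key-chain name is hypothetical.

```python
# Named-KEK sketch: only the long-lived key-encrypting key has a name and
# sits on the key chain; each message gets a fresh, unnamed data-encrypting key.
from cryptography.fernet import Fernet

key_chain = {"purchasing-kek-01": Fernet.generate_key()}   # hypothetical name

def seal(kek_name, plaintext):
    dek = Fernet.generate_key()                          # fresh, unnamed DEK
    wrapped_dek = Fernet(key_chain[kek_name]).encrypt(dek)
    body = Fernet(dek).encrypt(plaintext)
    return kek_name, wrapped_dek, body                   # the final message

def open_sealed(message):
    kek_name, wrapped_dek, body = message
    dek = Fernet(key_chain[kek_name]).decrypt(wrapped_dek)
    return Fernet(dek).decrypt(body)

assert open_sealed(seal("purchasing-kek-01", b"order #1774")) == b"order #1774"
```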

Tamper proofing

Asymmetric cryptography can exploit a similar scheme for public keys. To check and trust a digital signature, the recipient must have a high-integrity source for the sender's public key. How is this possible? What prevents an attacker from tampering with the trusted database of public keys?

The solution involves creating a digital certificate, a signed message stating that someone's public key is a particular 1024-bit number and that this key is valid for a specified time. Digital certificates are created by trusted third-party organizations called certificate authorities (CAs), which have good physical security. In small systems, every recipient has a built-in tamper-resistant copy of the CA's public key, so they can all verify signatures on digital certificates. In larger ones, the recipient might need to look up the CA's public key by finding the CA's digital certificate, which is of course signed by a higher-level CA.

This process is repeated until it reaches one of the public keys trusted by the recipient. These trusted keys are called "root keys" because they form the root of the tree of certificates, each signing the certificate immediately beneath it in the tree. Root keys must be built into the recipient software. For example, a company called VeriSign Inc., which runs a CA for commercial Web servers, will issue digital certificates only to companies that give it certain notarized business documents. Web browsers have the root public key for VeriSign compiled into their software, so they can validate a commercial Web server certificate.

One major benefit of asymmetric cryptography is that the root public key does not have to be secret, just tamper proof. In fact, several root public keys appear regularly in The New York Times. People can confirm that their copy of the key has not been altered by comparing it with the published version.

In the early days of public-key cryptography, some users tried to create a single root key and a single hierarchy of certificates that would identify all the people in the world. This approach failed because it assumed that digital certificates were proofs of identity. It makes more sense to regard the certificate as a contract that identifies two or more of its signers and specifies some of its parameters.

For example, the digital certificate someone might use to buy goods with a bank's credit card includes information about the card, the bank, the user, the account, the certificate's expiration date, and perhaps a separate date for the expiration of the account. It might even include information on spending limits to distinguish high-dollar corporate purchasing cards from retail credit cards. This model implies that different certificate hierarchies support various classes of contracts. The proof-of-identity certificate is a special "is-a-citizen" contract.

Root keys for certificate hierarchies bring up another problem of scale. The monetary value of the private key that signs all the digital certificates used by credit card companies is enormous. Even if no fraud were ever committed with a compromised key, the cost of changing the root key would be huge. New mechanisms are needed to handle key management in cases where potential losses scale up from thousands to billions of dollars.

VeriSign's root private key is already protected by a mechanism called secret sharing. The idea is to take an important secret and break it into parts that can be given to different trustees, so that the original secret can only be reconstructed if enough trustees provide their parts of the secret.

For example, one commercial root private key might be split into five parts, with at least three required to reconstruct the private key. Since the private key is needed only when a new certificate authority is created, the overhead of gathering the physical trustees is acceptable. All-electronic forms of this scheme do not require people to fly to the same location. For instance, a work-flow automation product could use such a scheme to enforce a dual-signature business rule on large purchase orders: the purchase order could be signed with a private key that would have to be reconstructed from any two of the key shares held by people authorized to sign large orders.

Secret splitting is based on the idea of fitting curves to points on a plane (for instance, drawing a line that passes through two points, or a parabola that passes through three). Suppose you want to split a secret into five parts and require at least two of them to reconstruct it. Any two points uniquely determine the slope of a line, but one point tells you nothing. If the secret is represented by the slope of a line, the five shares of the secret can be created by picking any five points along any line with the correct slope. Any two of these points can be used to reconstruct the secret, but one point does not help an attacker.
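A 2-of-5 version of this picture fits in a few lines; the prime modulus keeps the arithmetic exact, and the specific numbers are illustrative.

```python
# Secret-splitting sketch (2-of-5): the secret is the slope of a line over a
# prime field; any two of the five points recover it, one point reveals nothing.
import secrets

P = 2**61 - 1                       # a prime modulus (illustration only)

def split(secret, n=5):
    intercept = secrets.randbelow(P)
    # n points on the line y = secret * x + intercept (mod P)
    return [(x, (secret * x + intercept) % P) for x in range(1, n + 1)]

def reconstruct(share_a, share_b):
    (x1, y1), (x2, y2) = share_a, share_b
    # slope = (y2 - y1) / (x2 - x1), computed with a modular inverse
    return (y2 - y1) * pow((x2 - x1) % P, -1, P) % P

shares = split(123456789)
assert reconstruct(shares[0], shares[3]) == 123456789   # any two shares suffice
```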

To get higher thresholds, such as three out of five, this scheme can be extended to higher-order curves, like quadratics and cubics. Another extension, which helps with a large secret, involves breaking it into small pieces of perhaps 1 byte apiece and applying the secret-splitting algorithm to each of them. If knowledge of one piece does not help deduce the values of the others, this extension is as secure as the original mechanism. Root private keys have this property.

Electronic commerce systems are secured by combinations of the basic mechanisms. Engineering tradeoffs achieve a good balance among improved productivity, ease of use, performance, and risk management. Cryptographic security mechanisms are added to improve the latter, usually at the expense of the other desiderata.

In any case, cryptographic security mechanisms can cut costs and improve productivity by eliminating the need for face-to-face contact. Bank teller machines are a prime example. Each of them uses a secret key, stored in hardware, that is shared with its parent bank's key-distribution center. That center acts as a translator to connect the machine to other centers with which it shares a different secret key. This chain of pair-wise secrets eventually leads to the user's own bank, which approves the transaction. The system uses a hierarchy of shared secret keys to build up an international money-transfer operation.

Assuring payment

Payment systems are among the most active engineering areas for electronic commerce. If merchants can be assured of receiving payment, they can accept orders from the growing market of Internet users. Several systems are in use today, and new ones will be deployed in coming years. A San Diego, Calif., company called First Virtual (FV) has the simplest deployed system, which uses ordinary insecure e-mail as a communication mechanism and relies on legal contracts to provide all security.

Consumers and merchants set up accounts with FV. When the consumer orders goods, the merchant sends FV e-mail with the transaction details and purchase amount. FV then sends e-mail to the customer asking for confirmation. If the answer is yes, FV tells the merchant that FV will pay for the goods after 90 days. FV then places a charge against the customer's credit card. The 90-day wait is the system's hook into the existing legal system for credit card fraud. If the merchant or consumer notices a problem, they have 90 days to send FV a report. Should the waiting period go by without any reports, no one has the right to dispute the transaction, so FV can pay the merchant. The amazing thing about the FV mechanism is that the demand for electronic payment is so high that merchants and consumers are willing to accept its limitations.

The SET protocol

Visa and MasterCard are preparing to deploy a more secure system for credit card transactions. Their Secure Electronic Transactions (SET) protocol uses public-key cryptography to provide authentication, privacy, and integrity. Several companies are building software systems that will incorporate this protocol, which uses digital certificates for authentication and for creating signed and unsigned digital envelopes. The certificates are issued by consumer banks to the card holders and by credit card processing centers to the merchants.

All the certificates chain back to a global root public key--shared by MasterCard, Visa, and other payment-card issuers--that is built into browsers, electronic wallets, merchant servers, and other commercial software packages. The problem of looking up certificates is solved by requiring SET messages to contain all the certificates needed for their authentication unless the sender knows that the recipient already has appropriate certificates.

SET has highly specific privacy goals. It does not attempt to secure such shopping information as the list of goods being purchased, for other protocols, like the Secure Sockets Layer, can protect that. Rather, SET has been designed carefully to prevent the merchant from finding out the cardholder's credit card number. Today, merchants always learn the card number, and this is a major source of fraud. Cardholders who use SET put their card numbers in an envelope that can be opened only by the card-processing center, not the merchant. The merchant trusts the processing center to check the card's validity.

SET was designed to conform to U.S. export control laws, so SET-compliant software can be produced in the United States and sold worldwide. (RSA Data Security has actually been granted the right to sell its SET software, S/PAY, globally.) However, the SET protocol's business model does not meet the needs of all countries. For example, in Japan most electronic payments involve direct transfers between bank accounts, not credit cards: buyers tell their banks to transfer payments directly into sellers' bank accounts, avoiding the 3 percent service charge levied by most credit card associations. In Japan, consumer bank accounts are often tied to an automated system of loans linked to the quarterly and yearly bonuses that make up the majority of the income of most Japanese consumers. The current SET protocol does not deal with the banking regulations that govern these loan accounts, but work has started on a SET-like protocol that would meet Japan's needs.

Stored-value cards offer another approach. The idea is to trust a computer about the size of a credit card to store and track your money (see "In Your Pocket: Smartcards"). Funds can be added to the electronic wallet by the consumer and removed by the merchant. The security of stored-value cards is based on keys stored in the hardware and used to convince merchants that cards are legitimate and backed by enough money. The card must also recognize legitimate banks so that it cannot be tricked into adding funds that are not truly available.

The primary benefit of stored-value cards is that they eliminate the need for on-line communication with banks--a highly desirable feature for developing countries that do not have extensive phone systems. Such countries can directly convert the basis of their economies from cash to stored-value cards, without passing through the paper check and plastic credit card stages. A similar technological leap has already happened in telecommunications: countries that had only a few copper-wire telephones installed very quickly acquired many of their satellite and cellular counterparts.

Another approach that avoids on-line communication with banks has cropped up for micropayments, transactions for such low-value goods as a piece of candy or a copy of a newspaper. The low profit on these small transactions does not permit high processing or communication costs. Though differing quite a bit on the details of initial setup, user authentication, and grouping of transactions, most micropayment schemes are based on the one-way properties of cryptographic digest functions. The basic idea is to use a cryptographic function to create a roll of numbered coins. You spend a coin by telling the merchant the coin number, and numbers can be chosen to prevent anyone but a legitimate merchant from presenting them to the bank for payment.

In one simple micropayment protocol, the number on the first coin is chosen at random. The number on each successive coin is the digest of the number on the preceding one. With knowledge of a single coin number, anyone can compute the numbers of all subsequent coins in the roll, but no one can deduce the number of the preceding coin, since the digest is a one-way function. A merchant can prove to a bank that a consumer has spent 10 coins by presenting the first coin in the 10-coin sequence along with a digitally signed note from the consumer identifying the last coin in the sequence, something only the consumer could have known. The performance benefit is that only the last coin requires a digital-signature operation; all the others can be checked with the fast digest functions.
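A sketch of such a coin roll, using SHA-256 as the digest function; the roll length and verification loop mirror the 10-coin example above, and the signature step is omitted.

```python
# Hash-chain coin sketch: each coin number is the digest of the one before it,
# so anyone can walk forward along the roll but no one can walk backward.
import hashlib
import secrets

def next_coin(coin):
    return hashlib.sha256(coin).digest()

first_coin = secrets.token_bytes(32)        # chosen at random by the consumer
roll = [first_coin]
for _ in range(9):                          # a roll of 10 coins
    roll.append(next_coin(roll[-1]))

# The merchant presents the first spent coin; anyone can recompute the rest of
# the sequence and confirm it ends at the coin named in the signed note.
claimed_last = roll[-1]
coin = roll[0]
for _ in range(9):
    coin = next_coin(coin)
assert coin == claimed_last
```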

Not payments alone

Of course, electronic commerce is more than just payment systems: it also deals with entering and tracking orders, sharing design specifications, and creating contracts. The problem is how to share information securely and how to give outsiders controlled access to internal corporate information systems. Two sets of standards address this problem.

The Electronic Data Interchange (EDI) standard is a collection of agreements for representing business information ranging from electronic circuit designs to purchase-order forms. The overhead of setting up an EDI system for a particular application is very large, so this approach is mostly restricted to high-volume transactions between long-term business partners, such as automobile makers and their suppliers.

From a security perspective, an EDI document is like an e-mail message that must be transferred from one company to another with very high reliability, integrity, authentication, and privacy. In the past two years several e-mail vendors have adopted an e-mail security standard called Secure Multipurpose Internet Mail Extensions (S/MIME) which addresses most of the problems involved in sending EDI documents over the Internet. S/MIME uses digital envelopes and signatures to ensure the authenticity, privacy, and integrity of e-mail, and companies are now offering EDI systems that transmit documents using S/MIME. To meet EDI's stringent need for reliable delivery, these EDI products add return receipts and other end-to-end security features.

The leading choices for low-volume business information exchange are the World Wide Web standards: the Hypertext Transfer Protocol (HTTP) and the Hypertext Markup Language (HTML). Together, they provide a way of creating cross-platform graphical business forms (via HTML) and of submitting these forms to business systems and viewing the results (via HTTP). Unlike EDI, these mechanisms are used interactively over a network connection that can be thought of as a stream of bytes, similar to a modem connection over a phone wire. The security problems are identical to those presented by EDI, but the most common solution protects the underlying network connection, not the HTML or HTTP messages.

The protocol called the Secure Sockets Layer (SSL) was invented to protect HTML-over-HTTP connections, though it can be used with other byte-stream protocols, such as FTP or Telnet. When a consumer connects to a merchant's server over the Internet, SSL sets up a unique cryptographic key used to protect the privacy and integrity of the remaining bytes of the network session. In the simplest case, only the merchant has a digital certificate and the public key inside that certificate is used to transmit the shared secret key chosen by the consumer.

The merchant can send the consumer a certificate over an insecure channel, since the consumer can verify the certificate's integrity and authenticity with a global-root public key compiled into the consumer's software. The consumer then picks a fresh random master key and sends it to the merchant encrypted with the merchant's public key. Only the intended merchant will be capable of decrypting this message.

Now both parties share a secret master key, and from that point on they can derive keys for encryption and integrity checking. Henceforth, no one can see the contents of the data stream, so the consumer can fill out HTML forms that include such sensitive information as account numbers, passwords, and stock purchase transactions. The more advanced form of SSL includes digital certificates for both merchant and consumer, providing for greater authentication and for the creation of a shared master key that depends on random values chosen by both parties.
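A much-simplified sketch of that key setup, assuming the Python cryptography package; RSA-OAEP stands in for the key-exchange step, and the derivation labels are invented for illustration rather than taken from the SSL specification.

```python
# Simplified SSL-style setup: the consumer picks a random master key, sends it
# under the merchant's public key, and both sides derive separate encryption
# and integrity keys from it with a digest function.
import hashlib
import secrets
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

merchant_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

master_key = secrets.token_bytes(32)                        # chosen by the consumer
handshake = merchant_private.public_key().encrypt(master_key, oaep)

recovered = merchant_private.decrypt(handshake, oaep)       # only the merchant can

def derive(label):
    return hashlib.sha256(label + recovered).digest()

encryption_key, integrity_key = derive(b"enc"), derive(b"mac")
```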

The goods sold and sent over the Internet include such things as news articles and software. With software, consumers want to protect themselves against viruses. With news articles, they want to be sure of the information's authenticity. Digital signatures can address both of these concerns. If an application appears to be from Microsoft, the consumer can check its digital signature to prove its provenance and to ensure that it has not been altered by an attacker. Of course, attackers, too, can get digital certificates and create digital signatures, but they will not be able to masquerade as a well-known business.

Looking forward

Existing security mechanisms can be combined to create systems that greatly reduce the risks of electronic commerce. Doing so will be crucial to its large-scale adoption, especially in international business, where the cost of legal disputes involving more than one country is inordinately high. New mechanisms are needed to handle the increasing scale of electronic commerce, though digital certificates and hierarchies of certificate-issuing entities provide an adequate start.

Tax and tariff laws interact with all electronic commerce systems. Normally, a purchase is a transaction between two people--the seller and the buyer--but the governments at both ends are also involved. Strong privacy may tempt some people to bypass local, regional, or international laws. The simplistic approach is to outlaw strong privacy. Better ones would extend traditional business record-keeping laws and add features for unannounced tests of compliance.

Illegal commerce is not a new problem. Governments currently balance the costs of tight enforcement against the benefits of an open growing economy. The need for high cryptographic security in international electronic commerce systems increases both the costs and the benefits. Each country will have to find its own new balance.

About the Authors

Robert W. Baldwin is a senior engineer at RSA Data Security Inc., Redwood City, Calif., where he is responsible for the SET engine product for secure credit-card transactions over the Internet. He has designed security products, and led teams that built them, at Oracle Systems, Tandem Computers, and LAT.

C. Victor Chang is vice president of engineering at RSA Data Security. Before joining the company, he spent seven years at Apple Computer Inc., where he managed the development of collaboration software technologies and served as the cross-functional program manager for the Macintosh System 7 Pro which includes digital signatures and other cryptographic features. At Intel, Xerox, and First Pacific Networks he led development of communication, network server, and real-time operating system products.

To Probe Further

Applied Cryptography: Protocols, Algorithms and Source Code in C, 2nd edition, by Bruce Schneier (John Wiley & Sons, New York, 1995), is a compendium of security mechanisms and protocols that use cryptography to counter many threats.

 
