View From the Valley

$100,000 Prize If You Can Find This Secret Command in DOS

There have long been rumors that Microsoft copied CP/M to create MS-DOS for the IBM PC. In 2012, consultant Bob Zeidman used forensic software tools to analyze the code for IEEE Spectrum and found no evidence of copying, as he reported in “Did Bill Gates Steal the Heart of DOS?” Since that analysis, Microsoft has donated previously unavailable MS-DOS source code to the Computer History Museum (Zeidman did his original analysis using QDOS), and the museum also located and released a more complete version of the CP/M source code. Zeidman reran his analysis and presented the results 6 August at the Vintage Computer Festival West.

The conclusion? Still no sign of copying of source code. And no evidence to support a long-running rumor that there is a secret command in MS-DOS that can be called to print out a copyright notice in Gary Kildall’s name.

However, Zeidman did find that at least 22 system calls (the commands used to request an action, like sending text to a printer or reading from a hard disk) had the same function number and function. That, he says, means Kildall “might have had a copyright claim for the system calls that [his company] could have litigated against Microsoft. On the other hand, there is a good chance Microsoft could have beaten such litigation by claiming it was a ‘fair use.’ ”
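To make that overlap concrete, here is a minimal illustration (written in Python purely for readability) of a few of the frequently cited matches between CP/M’s BDOS functions and MS-DOS’s INT 21h services. These four are well-documented examples; they are not Zeidman’s list of 22, which is in his full analysis.

```python
# A few commonly cited overlaps between CP/M BDOS calls and MS-DOS INT 21h
# services; both systems use the same function numbers for the same jobs.
# (Illustrative only; not Zeidman's full list of 22 matching calls.)
SHARED_SYSTEM_CALLS = {
    0x01: "console input: read one character",
    0x02: "console output: write one character",
    0x09: "print a string terminated by a '$' character",
    0x0A: "buffered console input: read a line of text",
}

for number, purpose in sorted(SHARED_SYSTEM_CALLS.items()):
    print(f"Function {number:#04x}: {purpose}")
```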

Meanwhile, Zeidman has had a lot of pushback since his original article was published, in part because it turned out he had an open consulting contract with Microsoft at the time. (He reports that it was an old and inactive contract and that he has since severed all ties.) So he’s putting up $200,000 in prize money: $100,000 for anyone who can use “accepted forensic techniques” to prove the copying, and another $100,000 for anyone who can find that secret Kildall copyright function.

His full analysis of the code and supporting documents are here.

A Peek Inside Andy Rubin’s Playground

Playground Global, an effort to make it easier for Silicon Valley hardware startups to make their ideas real, came on the scene in 2015. A group of investors including Hewlett-Packard, Google, Hon Hai Precision Industry Co., Seagate, and others—and led by Andy Rubin of Danger and Android fame—backed the effort with at least US $48 million. The mission: create a hardware “studio.” It’s something more than an incubator or an accelerator, in that it allows inventors to focus on their gadgets and removes as many of the barriers to doing that as possible.

I got my first peek inside Playground’s spacious headquarters last week. And it sure seems to have everything a Silicon Valley engineer could want, and then some. The setup includes two tricked-out engineering labs, a prototype-building area with multiple 3-D printers, various cutters, a laser-sintering machine, a testing lab, an optical lab, lots and lots of bench space, and 50 engineers available to help the startups. (Though it has room for 30 startups, only about a dozen seemed to be in residence during my visit.)

Playground has the ambience of a Google or Facebook, not a shoestring startup. That means free gourmet meals, a well-stocked snack bar, and an espresso machine that requires a training session to operate, along with the most stylish “quiet room” space I’ve seen (think giant modernized London phone booths). And, of course, Playground has a playground: in addition to a slide, which is a pretty common way to get from the second floor to the first at tech companies these days, there’s also a swingset.

All this is spread out inside a former apricot cannery, a Palo Alto building that I must have passed thousands of times but never noticed. It’s tucked behind Fry’s Electronics, long a mecca for hardware engineers needing a random component immediately, if not sooner.

CP/M operating system inventor Gary Kildall and his memoir.

CP/M Creator Gary Kildall’s Memoirs Released as Free Download

The year before his death in 1994, Gary Kildall—inventor of the early microcomputer operating system CP/M—wrote a draft of a memoir, “Computer Connections: People, Places, and Events in the Evolution of the Personal Computer Industry.” He distributed copies to family and friends, but died before realizing his plans to release it as a book.

This week, the Computer History Museum in Mountain View, with the permission of Kildall’s children, released the first portion of that memoir. You can download it here.

Wrote Scott and Kristin Kildall in an introductory letter: “In this excerpt, you will read how Gary and Dorothy started from modest means as a young married couple, paved a new path for start-up culture, and embraced their idea of success to become leaders in the industry. Our father embodied a definition of success that we can all learn from: one that puts inventions, ideas, and a love of life before profits as the paramount goal.”

Later chapters, they indicated, did “not reflect his true self,” but rather his struggles with alcoholism, and will remain unpublished.

ConnectedYard founder Justin Miller holds pHin, a gadget that puts swimming pools and hot tubs onto the Internet of Things (IoT).

Uber + Instacart + the IoT—for Your Swimming Pool

Justin Miller’s resume is a tale of tech history: AT&T, Apple, Webvan (ouch), eBay, Plaxo…and then running a 250-person innovation center for Comcast. He left Comcast in 2014, looking to do a startup of some kind. And then, at a backyard barbecue, standing around a pool with a drink in hand, he ran one idea he’d been batting around past Mark Janes, a serial entrepreneur who had founded several startups in the United Kingdom before moving to the United States.

Miller’s idea: simplifying pool and hot tub care, in the way that Uber simplifies travel, and Instacart simplifies grocery delivery. But he wanted to take it one step further, by putting people’s pools on the Internet of Things so they can manage much of their own day-to-day maintenance.

The idea is a mix of just about everything hot in tech today: on-demand services, the IoT, and the cloud. That could make it a joke. Remember “Pandora for Cats”? Or it could be that this kind of technology is positioned to catch a big wave.

Miller, of course, thinks it’s the latter. Janes agreed, and in the third quarter of 2014, the two started ConnectedYard. Since then, they’ve convinced Andy Rubin, cofounder and former CEO of both Danger and Android, along with Tandem Capital and a number of other VCs, that the next big thing is indeed the connected pool. ConnectedYard has raised US $2.4 million so far, along with support in the form of office space, hardware engineering, and other resources from Rubin’s Playground Global. Meanwhile, it has hired 30 people.

The pool gadget (really a device + app + system) they’ve built is called the pHin. Its hardware is a waterproof package of chemical sensors, a processor, a battery, and a Bluetooth radio that is shipped with a starter kit of color-coded pouches of chemicals. The sensor package bobs around in the pool or hot tub and communicates with a small WiFi bridge plugged in somewhere nearby. The sensor in the water sends regular updates to the bridge, reporting the water’s chlorine levels, pH, and temperature. The bridge then relays the data to ConnectedYard’s cloud server (the IoT part), and the server sends a notification to the user’s Apple or Android phone whenever action needs to be taken to balance the chemicals in the pool. The user follows the instructions to toss in the specified number of different colored pods—say, two green (shock, a common pool treatment) and one blue (chlorine). The system keeps track of the number of pods used and ships out new supplies every month or two, before the user runs out of them (that’s the Instacart part). If the pool needs more complicated service than a chemical adjustment—say, a filter repair—the phone app brings up a menu for on-demand pool servicing (that’s the Uber part).
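The cloud side of that loop is conceptually straightforward. The sketch below is not ConnectedYard’s code; it’s a minimal illustration, with invented thresholds and function names, of how a server could turn an incoming sensor report into a push notification and a pod instruction.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical target ranges; real targets depend on the pool and its sanitizer.
PH_RANGE = (7.2, 7.8)        # typical recommended pH window
CHLORINE_RANGE = (1.0, 3.0)  # free chlorine, parts per million

@dataclass
class Reading:
    ph: float
    chlorine_ppm: float
    temperature_c: float

def recommend_action(reading: Reading) -> Optional[str]:
    """Return a human-readable instruction, or None if the water is balanced."""
    if reading.chlorine_ppm < CHLORINE_RANGE[0]:
        return "Chlorine is low: add one blue pod (chlorine)."
    if not (PH_RANGE[0] <= reading.ph <= PH_RANGE[1]):
        return "pH is out of range: follow the pod instructions in the app."
    return None

def handle_reading(reading: Reading, notify: Callable[[str], None]) -> None:
    """Called each time the WiFi bridge relays a report from the floating sensor."""
    action = recommend_action(reading)
    if action is not None:
        notify(action)  # in the real system, a push to the owner's phone

# Example: a hot tub whose chlorine has drifted low triggers a notification.
handle_reading(Reading(ph=7.5, chlorine_ppm=0.4, temperature_c=38.0), notify=print)
```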


ConnectedYard has priced the product as a subscription service; sign-up costs $499 for year-round pools and $399 for year-round hot tubs (and less for seasonal use). The hardware will come free with a subscription. It’s likely to be replaced every two years, which is the life of the chemical sensors and battery. (It won’t need recharging.)

The team had to overcome a lot of challenges to make pHin happen. Among them were lining up manufacturers for the chemical pods, recruiting a network of pool service firms, and writing the cloud software that tracks chemicals in the pool and in inventory for what Miller hopes will be upwards of hundreds of thousands of users. But getting the hardware to work was particularly tricky, he says. The sensors that measure water chemistry have been around for a decade, Miller says, but it was difficult to reduce the power consumption enough to make battery operation feasible. Antenna range, for a gadget that can drift around a large pool, was also problematic. And, he says, keeping the whole thing waterproof, particularly in a steamy hot tub, was more difficult than expected.

Another stumper: The gadget is essentially a computer, and computers, on occasion, need to be rebooted. Putting in some kind of external button, they realized, would make it harder to waterproof. The solution turned out to be a magnetic switch. The pHin ships with a magnet in its cap, to be used to swipe the internal switch that turns the gadget on for initial installation as well as for rebooting.


On the software side, Miller says ConnectedYard’s initial mistake was tracking the water quality too closely, triggering frequent alerts. “We needed to change that to average out the bumps and make the system behave more like a human than a machine. [We wanted it] to be accurate, but not overly demanding. The idea is to make pool care easier, not harder.” The system, Miller says, can’t be easily hacked, because the floating sensor system only sends out data, and doesn’t receive any.
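One way to “average out the bumps,” as Miller puts it, is to alert on a rolling mean of recent readings rather than on each raw sample. Here is a minimal sketch of that idea; the window size and pH limits are invented for illustration, not ConnectedYard’s numbers.

```python
from collections import deque

class SmoothedAlert:
    """Alert on the rolling average of recent readings, not on single spikes."""

    def __init__(self, low: float, high: float, window: int = 12):
        self.low, self.high = low, high
        self.recent = deque(maxlen=window)  # keep only the last `window` reports

    def update(self, value: float) -> bool:
        """Record a reading; return True only if the smoothed value is out of range."""
        self.recent.append(value)
        average = sum(self.recent) / len(self.recent)
        return not (self.low <= average <= self.high)

# A single odd sample right after adding chemicals is absorbed by the average;
# a sustained drift below 7.2 eventually raises an alert (only the last reading here).
ph_monitor = SmoothedAlert(low=7.2, high=7.8)
samples = [7.5, 7.5, 6.8, 7.5, 7.4, 7.4, 7.3, 7.2, 7.1, 7.0, 6.9, 6.9, 6.8]
print([ph_monitor.update(s) for s in samples])
```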

ConnectedYard, which just started shipping the product, expects to have over 10,000 customers in the continental United States by the end of this year.

The company says it isn’t stopping with getting the pool on the IoT, but isn’t announcing its next product just yet. However, because the founders were at a barbecue when they started this whole thing, I’m told it shouldn’t be too hard to figure out.

Image on a phone of a person standing next to Pikachu Pokemon

Pokemon Go Is AR’s Foot in the Door to Our World

Taking only 13 hours to reach the top of the highest-grossing app chart in the United States, Pokémon Go broke a number of world records in its first week and is now the phone app with which people spend the most time daily, surpassing Facebook, Snapchat, and Twitter. Even Niantic, the company behind Pokémon Go, wasn’t prepared for this level of popularity, and its servers are struggling. Many non-players, walking around those determined to “catch them all,” are wondering when (and hoping that) this craze will end. But even they will agree that the launch of Pokémon Go marks a place in history, and it is likely to influence the direction of game development and player interactions for years to come.

Technically, Pokémon Go is not an augmented reality (AR) game, in that it doesn’t really integrate the Pokémon into the user’s perception of the world, though some believe the overlay of the cartoon characters on the camera’s view of reality might be the earliest glimpse of what AR games will look like in the future. The game, as it stands, is more of a location-based game. True AR will come in the form of headsets that seamlessly fuse the virtual and the real from companies like Meta and Magic Leap.

Pokémon Go, however, is indeed playing a critical role in democratizing the idea of AR, so that everyday users will be more prepared to adopt the technology when it is later introduced. Shifting the public’s understanding of what is “normal” is critical for the adoption of emerging technology. Remember when hands-free calling first became available on cellphones? People walking in the streets, seemingly mumbling and laughing to themselves, confused many nonusers. Nowadays, people barely even notice. Likewise, today’s Pokémon Go players look odd to outsiders, but norms will shift here as well.

ThinAir Labs CEO Tony Gauda says his company's cybersecurity tech can keep files out of the hands of hackers

Cybersecurity Startup: If the DNC Had Our Security Tech, It Could Have Bricked Those Files

I generally cover consumer technology, not enterprise. And particularly not enterprise IT security, which can be extremely arcane. But given this week’s political developments, with the WikiLeaks release of the Democratic National Committee’s emails (suspected to have been purloined by Russian hackers), security has certainly been on my mind.

Silicon Valley startup ThinAir Labs made me an interesting promise: that they could explain their security approach so concretely and succinctly that I would understand the gist of it in less than an hour, if not minutes. So I made a quick trip to their downtown Palo Alto offices to check it out. And company founder and CEO Tony Gauda was right; it’s an easy concept to grasp (though it took him three years to implement).

Gauda spent much of his career building fraud prediction systems for Mastercard—systems that, he says, were the first to detect the massive security breaches at Target and Home Depot. He then went on to found BitCasa, a company that offered encrypted cloud storage for consumers. ThinAir Labs, founded in 2013, combines both approaches, he says.

Generally, when the company’s software, ThinAir, is running on a computer, it’s monitoring behavior, in the same way credit card fraud prediction systems monitor behavior, Gauda explains. If the system spots something odd (say, a number of files being opened in rapid succession, far faster than you could manage manually, or screenshots being taken of documents, or documents being copied and sent somewhere you’ve never had contact with before), you’ll get an alert on your phone that tells you not only what is happening with your computer, but where.

“People’s behavior is usually consistent, they open things at a human speed, they run normal applications. Malware and hackers have different behavior. Copying two terabytes of data isn’t normal behavior,” says Gauda.
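The “human speed” heuristic Gauda describes can be sketched in a few lines. This is not ThinAir’s algorithm; it’s a toy, rate-based check with invented thresholds, just to show the flavor of behavior monitoring.

```python
import time
from collections import deque
from typing import Callable

class FileAccessMonitor:
    """Flag bursts of file opens far faster than a person could manage by hand."""

    def __init__(self, max_opens: int = 20, window_seconds: float = 10.0):
        self.max_opens = max_opens          # invented threshold, not ThinAir's
        self.window_seconds = window_seconds
        self.timestamps: deque = deque()

    def record_open(self, path: str, alert: Callable[[str], None]) -> None:
        now = time.monotonic()
        self.timestamps.append(now)
        # Discard events that have fallen out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window_seconds:
            self.timestamps.popleft()
        if len(self.timestamps) > self.max_opens:
            alert(f"{len(self.timestamps)} files opened in the last "
                  f"{self.window_seconds:.0f} s; most recent: {path}")

# Simulate a scripted bulk copy: alerts fire once the burst exceeds the threshold.
monitor = FileAccessMonitor()
for i in range(30):
    monitor.record_open(f"/mail/archive_{i}.eml", alert=print)
```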

In the case of the DNC emails, Gauda says, the system would have spotted that files were being copied and would have shown on a map where they were going, so a system administrator could have stopped the export and given investigators a good idea of where the hack originated.

For confidential files, Gauda says, ThinAir has another layer of security: a virtual vault on the user’s computer. To protect a file, the user puts it into the “safe.” That locks the file so only authorized users can access it. It stays protected when it comes out of the safe, whether dragged, copied, or emailed. Others will be able to open the file if they are people the user normally allows to open these types of files (that behavior tracking again); when new correspondents get the file and try to open it, the document’s creator will get an alert asking for authorization. If files are stolen, the document owner can lock them all up instantly. The system works with any kind of file, even ones it has never seen before. “You could be a spook agency with an app that has never seen the light of day, and this would still work out of the box to protect it,” Gauda says.
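Stripped of the cryptography and the cloud plumbing, the safe’s decision on each open looks roughly like the function below. All of the names and policy details here are hypothetical; ThinAir hasn’t published its design.

```python
from typing import Callable, Set

def may_open(file_id: str, requester: str, usual_recipients: Set[str],
             locked_files: Set[str], ask_owner: Callable[[str], bool]) -> bool:
    """Decide whether a protected file opens, wherever it has been copied or sent."""
    if file_id in locked_files:
        return False  # the owner has locked stolen files down remotely
    if requester in usual_recipients:
        return True   # consistent with the owner's normal sharing behavior
    # A new correspondent: the document's creator gets an alert and decides.
    return ask_owner(f"{requester} is trying to open {file_id}. Allow?")

# Usage: a longtime collaborator opens the file; a stranger triggers a prompt
# (declined here by a stubbed-in owner response).
usual = {"colleague@example.org"}
print(may_open("q3-plan.docx", "colleague@example.org", usual, set(),
               ask_owner=lambda q: False))   # True
print(may_open("q3-plan.docx", "stranger@example.net", usual, set(),
               ask_owner=lambda q: False))   # False, and the owner was asked
```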

“We deeply protect files in the safe, and monitor the files outside the safe,” Gauda says. “If one of the protected files gets stolen, we have a record of everyone who touched it, we can understand the scope of the breach, we know where it happened geographically, and we can go back and undo it. In the case of the DNC emails, the system would have noticed that tons of data were being exfiltrated because files were being opened. It could see where they were going—to the Kremlin, perhaps? And the DNC might have received an alert indicating, say, that Putin has requested access.”

The simplicity, Gauda indicates, is hugely important, because people don’t use things that aren’t simple. “People and organizations make bad security decisions all the time, especially in the government,” he says, “just turn on the news any night to see it.”

“We make the default behavior secure, rather than requiring people to change their behavior.”

ThinAir will be launching next week at the Black Hat USA conference in Las Vegas. The company plans to charge a monthly fee; the price has yet to be announced.

The VCR: The disputed territory that Ampex, Betamax, JVC, and Sony fought over—and that we all eventually landed on.

Good-bye, VCR, You Served Us Well

The VCR. I wasn’t an early adopter, or an extreme user. But the VCR indeed played a featured role in different stages of my life. It helped me keep my standing date with the U.S. TV show “Thirtysomething” back in the 1980s. In the ’90s, it played “Barney & Friends” over and over when that was the only relief my then 2-year-old could get from the pain of the coxsackievirus. And for at least a decade, it let my mother across the country see her grandbabies take their first steps, sing their ABCs, and reach all sorts of other milestones she would have otherwise missed.

I have professional memories of the VCR as well. I visited the source of video recording, Ampex, just once, though I drove past its iconic sign for years. That visit was a true pilgrimage: The VCR wasn’t the only thing that came out of that company; so did Ray Dolby, Nolan Bushnell, and a host of other engineers who changed the entertainment industry. Over the years, I also met with engineers at Sony and JVC, the companies that brought videocassette recording to the mass market, and whose battle over consumer VCR formats is legendary. I told the story of the development of the VCR on the occasion of IEEE Spectrum’s 25th anniversary in 1988, recalling how the engineers at Sony and JVC made different design choices along the way to the same goal. You can read that article here.

I still have a VCR in my cabinet and a handful of favorite obscure movies that never made it to DVD sitting on a shelf. And so this month’s news—that the last-known company still manufacturing VCR hardware is stopping production because it can no longer get parts—sent me gazing fondly at that now-irreplaceable gadget (my current model is a Go-Video VCR/DVD combo).

But my guilty-pleasure TV shows are available for streaming to my laptop (and watching with headphones) late at night. The kid videos were repurchased as DVDs, and now those are mostly obsolete, thanks to Netflix, Amazon Prime, and other on-demand services. My mom left this world before the VCR did; I expect I’ll be live-streaming with my grandchildren.

So good-bye, VCR, and thank you. You served us well.

Photo of VCR, VHS, videocassette recording tapes

Revisiting the VCR’s Origins

1975: The VCR

JVC and Sony transformed an ingenious concept pioneered by Ampex into a major industry

(The following article was published in IEEE Spectrum in a special anniversary issue in 1988)

Consumer electronics companies worldwide felt sure that the public would be interested in a machine that would tape their favorite television programs in their absence for replay at home at their leisure. But in 1971, there were no such products on the market for consumers, and there was still some debate over what exactly people wanted. Two companies determined to solve both problems were Sony Corp. of Tokyo and the Victor Co. of Japan Ltd. (JVC) of Yokohama.

Obviously, that product had to include the convenient cassette. In 1962, the cassette had won over the mass market to audio tape recording, which until then had interested only audiophiles. But “the video problem was 10 times as complex as the audio problem,” explained Joseph Roizen, a former Ampex Corp. engineer who is now a consultant for the television and video industries.

Video signals range up to 4.2 megahertz and contain far more information than audio signals, with their 20-kilohertz maximum. An audio tape is simply pulled past an immobile recording head; but most videocassette recordings use helical scanning, with the tracks running at a diagonal across the tape and with the tape typically spiraled around a rotating drum with two or more recording heads on it. Therefore, unlike audio tape, which is left in the cassette and simply moved past the recording head, videotape must be literally pulled out of the cassette and wrapped around the drum, without ever slipping out of position.

By 1971, several companies had already built videotape players that used some type of cassette and tried to sell them to consumers—but failed. Ampex, of Redwood City, Calif., had briefly attempted to develop a product called InstaVideo that used tape cartridges. (A cartridge has only one reel, the supply reel, the take-up reel being built into the player, whereas cassettes have both supply and take-up reels built in.)

The InstaVideo (also called InstaVision) project died soon after it was brought to market. One of its problems was the cartridge, which was less reliable than a cassette and sometimes frustrated users. The group also could not get the cost down to a reasonable consumer price. Another problem, explained Roizen, was that Ampex had earned its reputation in the professional video realm, so that the sales force never seriously marketed the InstaVideo product, nor did distributors and retailers perceive it as a supplier of consumer video products.

A consortium of New York City businessmen with no experience in consumer electronics formed a company called Cartridge Television Inc. to launch a cartridge-based consumer video recorder—Cartrivision. The group spent huge sums on marketing and advertising but went bankrupt when tape problems necessitated a short recall. (For several years afterward, enterprising engineers were buying the unpackaged guts of the units for less than $100, packaging them, and reselling them.)

CBS Inc., in New York City, tried a different approach: a film cassette for home viewing of theatrically released movies, called EVR. This format could not record, however, and consumers were not interested. (Many of these failed formats are displayed at the Ampex Museum of Magnetic Recording, Redwood City.)

Sony meanwhile had developed the U-format or “U-matic,” a cassette-based recorder—in collaboration with JVC and Matsushita Electric Industrial Co., Osaka, and with licenses from Ampex—and had introduced it as a standard for VCRs in 1971. But the $2000 recorders and the $30 cassettes (in 1988 dollars about $6000 and $90) were big and heavy. The VCR unit measured 24.25 by 8.125 by 18.125 inches (61.6 by 20.6 by 46.4 centimeters) and weighed in at 59.5 pounds (27 kilograms). Consumers were again unimpressed, and the companies quickly retargeted the product to the educational and industrial training markets, where the U-format proved popular.

Smaller and cheaper

But as consumer product companies, neither Sony nor JVC was satisfied with the limited educational and industrial markets. They knew that to appeal to consumers they had to develop a VCR that was both smaller and cheaper than the U-format.

The companies hoped to work together to establish a standard for helical-scanning videocassette recorders using tape that was half an inch (12.5 millimeters) wide, which, said Roizen, “they were going to flood the world with.” They easily agreed that the tape width should be reduced to a half inch, rather than the three-fourths of an inch specified in the U-matic design. Then the trouble started.

Masaru Ibuka, the founder of Sony, who in the early 1950s had decreed that his engineers were to design a transistor radio the size of a man’s shirt pocket, came into the Sony offices one day, tossed the company’s employee handbook onto a table, and told his employees that the target for their VCR project was to be a videocassette smaller than that handbook. The size of a standard American paperback (150 by 100 by 15 mm), it was to hold at least one hour of color video, he said.

Meanwhile, the then general manager of JVC’s Video Products Division, Shizuo Takano, decided that it was time for JVC to come up with a worldwide standard for home video. To get things going, the general manager of JVC’s Research and Development Division, Yuma Shiraishi, drew up a matrix of requirements that was not quite as simple as the size of a paperback.

One key requirement of the system was a “more-than-two-hour recording capacity,” because Shiraishi noticed that movies and sporting events typically lasted two hours.

Sony showed a prototype of its proposed Betamax format VCR to Matsushita and a few other Japanese companies in 1974. According to Japanese trade papers, the chairmen of Sony and Matsushita met in secret late at night on the subway, with the Matsushita side arguing that it had found a way to get two hours of playing time on a cassette only a third bigger than a paperback book, and the Sony side unyielding on size and unwilling to go to a lower playing speed, which would make high picture quality harder to achieve.

Both Sony and JVC claim that their original VCR models offered 240 lines of horizontal resolution and a signal-to-noise ratio of about 45 decibels. Frank Barr, who tests video products for Advanced Product Evaluation Laboratories in Bethel, Conn., said that at the top of the line, the early Betamax models by Sony had a slightly better resolution and signal-to-noise ratio than JVC’s early VHS models. One reason for this slight difference lies in the selection of carrier frequencies: the VHS carrier signals fall between 3.4 and 4.4 megahertz, the Betamax signals between 4.4 and 5.6 MHz, the greater bandwidth allowing higher resolution. Though this difference was almost indiscernible, it led videophiles to recommend Betamax as the ultimate format, Barr said.

After discussing the matter for about a year, the companies still would not compromise their primary design goals (paperback size versus two hours of playing time), so they decided to go their separate ways. (A Sony spokesman told IEEE Spectrum that, to this day, “Quite frankly, it is our belief that the VHS format was realized only after a thorough study of the Betamax system.” JVC, on the other hand, said that VHS was an independent design effort based on the matrix of goals drafted in 1971, and that when the company saw the Betamax and what JVC viewed as its fatally short recording time, its own product was only about 18 months from going into production.)

Whatever the real story may have been, Roizen said, “The monolithic Japan Inc. was split.”

In addition to tape width, the companies agreed on the use of helical-scanning technology. In audio tape recording, the recording head stays put and lays a longitudinal track on the moving tape. In early professional video recording, four heads on a rotating drum laid tracks directly across the width of the tape.

With the quad format, as it was called, information could be more densely packed than with the longitudinal format; also, because the tracks were so short, problems with tape stretching were reduced. On the other hand, one track could not hold all the picture information in a frame, which was therefore separated into 16 tracks, with each track read by one of the four heads on the drum. Differences in head quality and alignment led to banding on the screen or “venetian blind” effects.

Helical scanning, which wraps the tape around the drum at an angle, like a candy cane’s stripes, has the advantage of quad recording (reducing problems caused by tape stretching) but not its drawback: each slanted stripe can carry a full frame.

Going to a ½-inch tape in a reasonably small cassette required a number of technological advances that, working together, reduced tape consumption from approximately 8 square meters per hour for the U-format to approximately 2 m²/h for the VHS and Betamax units (the writing speed of VHS is slightly lower than that of Betamax: 5.80 meters per second versus 7.00 m/s). For one thing, advances in IC technology made by Sony and other companies allowed VCRs to produce a better picture with less noise (the signal-to-noise ratio in the U-matic was 40 dB as against the 45 dB claimed for the first Betamax and VHS recorders).

Moreover, improvements in video heads reduced their gap size by about a factor of 10, to 0.3 micrometer, allowing the tracks they wrote and read to be smaller and thereby increasing recording density. Also, advances in magnetic tape (specifically, the use of a cobalt alloy for the magnetic coating) increased its sensitivity and made it possible to pick up very short wavelengths.

Changing the guard

Besides the industrywide advances in IC, head, and tape technology, Sony and JVC each found means, albeit slightly different ones, of adapting to their products another recording technique that increased information density.

A technique called azimuth recording had been used in black and white videotape recording since the late 1960s. In azimuth recording, the video heads are mounted at angles—tilting one to the left and one to the right—from the perpendicular to the run of the tape. Because the tracks recorded by these heads are not parallel to each other, the heads are less likely to pick up the wrong signal.

Sony tried to apply this technique to color video recording in the U-matic, but it did not work. The color signals, which use lower frequencies than black and white signals, interfered with each other, and Sony had to leave blank spaces of tape as guard bands between the video tracks.

A researcher at Sony, Nobutoshi Kihara, continued to work on this problem even after the U-matic went into production. He developed a phase-inversion system, recording the color signals on adjacent tracks 180 degrees out of phase with each other, to eliminate interference between the signals.

JVC meanwhile came up with its own solution—a phase-shift system, recording each color signal 90 degrees out of phase with adjacent tracks. Both solutions let the companies eliminate the tape-wasting guard bands, and both were patented, Sony’s in 1977 and JVC’s in 1979.

M for manufacturability

While Sony was content to duplicate in its Betamax the U-loading mechanism developed for the U-matic, JVC instead used a system it called M-loading. JVC says that M-loading made the machine easier to manufacture, more compact, and more reliable, because the tape guide poles did not contain moving parts. Sony argued that M-loading was not superior and that U-loading only looked more complicated, whereas in reality it was a simple mechanical apparatus and indeed better than M-loading because it reduced stress on the tape (an M-loaded tape wound around two sharp turns, a U-loaded tape wrapped around one pole only).

Others say that both U- and M-loading solved the same design problem, and neither had a major advantage.

With U-loading, a single arm reaches into the cassette, pulls out the tape, and wraps it around the head. With M-loading, two arms, on either side of the recording head, grab the tape and pull it against the head, the arms traveling a much shorter path than the U-loading arm.

M-loading allowed JVC’s machine to be more compact than Sony’s—so much so that the unit was half the volume and still left more room between components than the Sony design. U-loading made it easy for Sony to add a picture-search function (fast-forward while still viewing the image) to its design, while JVC had a slightly harder time adding picture-search to its machines. (M-loading as initially designed put so much stress on the tape that the tape could not be allowed to run at high speeds without first being drawn back into the cassette, away from the head. JVC solved this problem by changing the stationary guide poles to rotating guide poles.)

To record for a longer time than Sony, JVC used a cassette tape 30 percent larger in volume and, as already noted, a lower writing speed (5.8 m/s versus Sony’s 7.0 m/s). Other things being equal, reducing the writing speed reduces the signal-to-noise ratio. JVC said it overcame this disadvantage by giving the signal a greater pre-emphasis boost, increasing the magnitude of some frequency components more than others to reduce noise.

Increasing the signal in this manner, however, leads to bleeding in white areas of the picture. Accordingly, in the JVC design the signal is also first sent through a high-pass filter to eliminate low frequencies, next has its high frequencies amplified and then clipped to stop the bleeding, and finally has the high frequencies recombined with the low frequencies and clipped again.
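For readers who want to see the shape of that chain, here is a toy digital rendition of it. The real JVC circuitry was analog; the filter, boost, and clipping values below are invented purely to illustrate the order of operations.

```python
import numpy as np

def preemphasis_chain(signal: np.ndarray, boost: float = 2.0,
                      clip_level: float = 1.0, kernel: int = 5) -> np.ndarray:
    """Toy version of the chain described above (not JVC's actual design):
    split off the high frequencies, boost and clip them, recombine, clip again."""
    lows = np.convolve(signal, np.ones(kernel) / kernel, mode="same")  # crude low-pass
    highs = signal - lows                                    # complementary high-pass
    highs = np.clip(highs * boost, -clip_level, clip_level)  # amplify, then clip
    return np.clip(lows + highs, -clip_level, clip_level)    # recombine, clip again

# A step edge (a transition into a white area) comes out sharpened but bounded.
edge = np.concatenate([np.zeros(20), np.ones(20)])
print(preemphasis_chain(edge).round(2))
```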

Sony offered the Betamax to Matsushita and other Japanese companies as a standard. Toshiba Corp. and Sanyo Electric Co. eventually took Sony up on the offer. JVC persuaded several other Japanese firms to join it in producing VHS machines. In the United States, Zenith Corp. initially joined the Betamax group, while RCA Corp. went with VHS.

Those consumers that marketers call “early adopters”—the technically literate videophiles with money to burn—quickly committed themselves to Sony’s Betamax because of reports that its resolution and signal-to-noise ratio were better. But since few of them—and hardly any consumers in the mass market—could tell a difference in quality between the two formats, the convenience of longer playing time won out, and today the VHS format is clearly the consumers’ choice, particularly in the United States.

The first models were introduced in 1975 and 1976—Sony’s Betamax SL-6300 at 229,800 yen ($820 at 1975 exchange rates), followed by JVC’s HR-3300 at 256,000 yen ($915). Then the two formats began converging. Sony responded to JVC’s built-in clock (for unattended recording) with a plug-in timer module for its original units and with built-in timers in its later models. Sony also introduced Beta II, with two hours of playing time, and JVC responded with its long-play mode, a six-hour format.

Both companies steadily worked to improve their picture through better signal processing, magnetic heads, and recording tape, and both added features such as the ability to program the VCRs for weeks at a time. Today both formats boast five to eight hours of recording time, depending on the type of tape used, and horizontal resolutions of between 400 and 500 lines. (These top-of-the-line models, known as the S-VHS and ED-Beta, are not downwardly compatible with earlier units.)

Fumio Kohno, Sony’s managing director, told IEEE Spectrum: “Competition between the Beta and VHS formats has contributed greatly to the improvement of both. It has also stimulated progress in home VCR technology, such as 8 mm video, and in digital audio tape.”

--Tekla S. Perry

The author wishes to acknowledge the help of Joseph Roizen of Telegen.

Magic Leap Hiring Software Engineers for New Development Lab on Lucasfilm’s San Francisco Campus

Magic Leap, the stealthy Florida-based company that has so impressed investors with its augmented reality demos that they’ve ponied up $1.4 billion to date, is beefing up its efforts to create content. The company has established a development lab on the Lucasfilm campus in San Francisco, located, VentureBeat reported, close to the Yoda fountain.

Last week, Magic Leap founder and CEO Rony Abovitz announced a partnership with Lucasfilm to develop Star Wars-related apps for Magic Leap. They’ll be using what the company calls its “Mixed Reality Lightfield” technology, the details of which are still under wraps. That project will clearly be happening in the San Francisco lab. But the company isn’t betting that Star Wars games alone will be enough to make the technology take off. It plans to draw all sorts of developers into the lab to work on applications for its new augmented reality technology.

Magic Leap may have some serious competition for augmented reality developers; with Pokemon Go capturing the imagination of gamers all over the world, it turns out that the AR explosion didn’t wait for Magic Leap to reveal its technology. AR games can be engaging even if you have to hold a phone up in front of you to peek into an AR world. Pokemon Go could be great news for Magic Leap, in the sense that it’s getting players used to AR technology and whetting their appetites for an AR interface that they don’t have to hold out in front of them. Or it could be a challenge: Can Magic Leap create compelling enough content to justify purchasing an expensive new device?

To support the developers it attracts, Magic Leap is in the process of hiring a team to work with them. Its job listings on LinkedIn currently include a number of positions that will be part of its “advanced content research group in San Francisco.” These include software engineers, an infrastructure engineer, an interaction engineer, and a technical director. Some of these positions are also listed on the company’s own hiring site.

No word yet as to how Magic Leap will reach out to developers for this lab, though it has made a few forays into the development community, for example, through a contest at Twilio’s Signal developer conference. Perhaps it could set out a Pokemon Go lure.


More Job Automation But More Jobs Too, Say U.S. Tech CEOs

There’s good news on the technology job search front, according to preliminary results of a survey by consulting firm KPMG. The firm queried 138 U.S. tech CEOs, and 95 percent expect to increase the size of their workforces over the next three years. Some 55 percent expect to grow at least 6 percent. (The full report will be released in August.)

This growth in tech jobs for humans might have been even larger, however, were it not for the positions that will be filled by automation and machine learning systems: about three-quarters of the tech CEOs expect automation and machine learning tools to replace at least 5 percent of their sales, marketing, technology, and manufacturing workforces.

Or, as the report from KPMG puts it:

“The majority of technology companies plan to increase their human workforce at least 6 percent over the next three years while adding cognitive systems to create a new class of digital labor that can enhance human skills and expertise.”

Bob Melk, president of job search firm Dice, told me that software engineers in particular should benefit from this trend toward workforce automation and machine learning. Dice sees, he says, “a high demand for software engineers and software developers, representing 7.5 percent of all jobs posted on Dice.”

Does this mean tech companies are going to start issuing reports on their digital labor force?

Updated 15 July 2016


View From the Valley

IEEE Spectrum’s blog featuring the people, places, and passions of the world of technologists in Silicon Valley and its environs.
Contact us:  t.perry@ieee.org

Senior Editor
Tekla Perry
Palo Alto