The Life of a Data Byte



Every cloud provider offers data storage services: hot storage, cold storage, even ice-cold archives. Storing information in the cloud is convenient. But how was data stored 10, 20, 50 years ago? Cloud4Y has translated an interesting article about just that.

A data byte can be stored in many different ways, as new, more advanced and faster storage media appear all the time. A byte is a unit of digital information storage and processing that consists of eight bits. A single bit holds either a 0 or a 1.

In the case of punched cards, a bit is stored as the presence or absence of a hole at a specific location on the card. If we go back a little further, to Babbage's Analytical Engine, the registers that stored numbers were gears. In magnetic storage devices such as tapes and disks, a bit is represented by the polarity of a specific area of the magnetic film. In modern dynamic random-access memory (DRAM), a bit is often represented as a two-level electrical charge stored in a capacitor, a device that holds electrical energy in an electric field. A charged or discharged capacitor stores a bit of data.

In June 1956, Werner Buchholz coined the word byte to denote a group of bits used to encode a single character of text. Let's talk a bit about character encoding, starting with the American Standard Code for Information Interchange. ASCII was based on the English alphabet, so every letter, digit and symbol (a-z, A-Z, 0-9, +, -, /, ", !, etc.) was represented as a 7-bit integer between 32 and 127. This wasn't exactly "friendly" to other languages. To support other languages, Unicode extended ASCII. In Unicode, each character is represented as a code point; for example, lowercase j is U+006A, where U stands for Unicode, followed by a hexadecimal number.

UTF-8 is a standard that represents characters using 8-bit code units, allowing every code point in the range 0-127 to be stored in a single byte. If we recall ASCII, this works fine for English characters, but characters from other languages are often expressed in two or more bytes. UTF-16 represents characters as 16-bit units, and UTF-32 as 32-bit units. In ASCII, every character is one byte, but in Unicode that is often not true: a character can occupy 1, 2, 3, or more bytes. This article will use differently sized groupings of bits; the number of bits in a byte varies with the design of the storage medium.

In this article, we will travel through time across various storage media to immerse ourselves in the history of data storage. By no means will we study in depth every information carrier ever invented. This is a fun informational article that makes no claim to encyclopedic completeness.

Let's start. Suppose we have one byte of data to store: the letter j, either as the encoded byte 6a or as binary 01101010. As we travel through time, this data byte will be stored in several of the storage technologies described.
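As a quick sanity check, the three faces of our byte line up like this (again an illustrative Python sketch, not part of the original article):

```python
# The same byte, three ways: character, hexadecimal, binary.
char = "j"
code = ord(char)            # 106
print(hex(code))            # 0x6a
print(format(code, "08b"))  # 01101010
```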

1951




Our story begins in 1951 with the UNISERVO tape drive for the UNIVAC I computer. It was the first tape drive designed for a commercial computer. The tape was a thin strip of nickel-plated bronze (called Vicalloy) 12.65 mm wide and almost 366 meters long. Our data byte could be stored at a rate of 7,200 characters per second on tape moving at 2.54 meters per second. At this point in the story, you could measure the speed of a storage algorithm by the distance the tape traveled.

1952




Fast forward a year to May 21, 1952, when IBM announced the release of its first magnetic tape unit, the IBM 726. Our data byte could now move from UNISERVO metal tape to IBM magnetic tape. This new home turned out to be very cozy for our tiny data byte, since the tape could store up to 2 million digits. This 7-track magnetic tape moved at 1.9 meters per second with a transfer rate of 12,500 digits or 7,500 characters (then called copy groups) per second. For reference: an average Habr article is about 10,000 characters.

The IBM 726 tape had seven tracks, six of which stored information and one of which was for parity. A single reel held up to 400 meters of tape 1.25 cm wide. The data transfer rate theoretically reached 12.5 thousand characters per second, with a recording density of 40 bits per centimeter. The system used a "vacuum channel" method in which a loop of tape circulated between two points. This allowed the tape to start and stop in a fraction of a second. It was achieved by placing long vacuum columns between the tape reels and the read/write heads to absorb sudden spikes in tape tension, without which the tape would normally snap. A removable plastic ring on the back of the tape reel provided write protection. One reel of tape could store about 1.1 megabytes.
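The six-data-tracks-plus-parity layout is easy to sketch. Here is a toy model (our own illustration; we assume even parity here, purely for the example):

```python
# Sketch of a 7-track tape frame: six data bits plus one parity bit.
# The IBM 726 stored a character across six data tracks and used the
# seventh track for parity; even parity is assumed for illustration.
def frame(six_bits: int) -> int:
    """Append a parity bit so the 7-bit frame has an even number of ones."""
    parity = bin(six_bits).count("1") % 2
    return (six_bits << 1) | parity

f = frame(0b101100)        # three ones -> parity bit is 1
print(format(f, "07b"))    # 1011001
```

A single flipped bit on any track would break the parity check, letting the drive detect the error.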

Remember VHS cassettes? What did you have to do to watch a movie again? Rewind the tape! And how many times did you wind a cassette for your player with a pencil, to save batteries and avoid a torn or jammed tape? The same goes for tapes used with computers. Programs could not simply jump around the tape or access data at random; they could read and write data strictly sequentially.

1956




Moving forward a few years: in 1956 the era of magnetic disk storage began when IBM completed development of the RAMAC 305 computer system, which it delivered to Zellerbach Paper in San Francisco. This computer was the first to use a hard disk with a moving head. The RAMAC disk drive consisted of fifty magnetized metal platters 60.96 cm in diameter, capable of storing about five million characters of data at 7 bits per character and spinning at 1,200 rpm. The storage capacity was about 3.75 megabytes.

RAMAC allowed real-time access to large amounts of data, unlike magnetic tape or punched cards. IBM touted RAMAC as a device capable of storing the equivalent of 64,000 punched cards. RAMAC introduced the concept of continuously processing transactions as they occur, so that data could be retrieved immediately while it was still fresh. Our data in RAMAC could now be accessed at 100,000 bits per second. Previously, with tapes, we had to write and read data sequentially and could not randomly jump to different sections of the tape. Real-time random access to data was truly revolutionary at the time.

1963




Let's fast forward to 1963, when DECtape was introduced. The name comes from Digital Equipment Corporation, known as DEC. DECtape was inexpensive and reliable, which is why it was used across many generations of DEC computers. The 19 mm tape was laminated between two layers of Mylar and wound on a four-inch (10.16 cm) reel.

Unlike its heavy, bulky predecessors, a DECtape reel could be carried by hand. This made it a great option for personal computers. Unlike its 7-track counterparts, DECtape had 6 data tracks, 2 mark tracks, and 2 tracks for clock pulses. Data was recorded at a density of 350 bits per inch (138 bits per cm). Our data byte, which is 8 bits but could be expanded to 12, could be transferred to DECtape at 8,325 12-bit words per second, with the tape moving at 93 (±12) inches per second. That is 8% more digits per second than the UNISERVO metal tape of 1952.
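Turning those figures into a familiar unit is simple back-of-the-envelope arithmetic (our own sketch, based on the numbers above):

```python
# DECtape throughput, derived from the figures quoted above.
words_per_second = 8325
bits_per_word = 12
bits_per_second = words_per_second * bits_per_word
print(bits_per_second)        # 99900 bits/s
print(bits_per_second // 8)   # ~12487 eight-bit bytes/s
```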
 

1967




Four years later, in 1967, a small IBM team began working on a drive code-named Minnow. The team was tasked with developing a reliable and inexpensive way to load microcode into IBM System/370 mainframes. The project was later reassigned and repurposed to load microcode into the controller for the IBM 3330 Direct Access Storage Facility, code-named Merlin.

Our byte could now be stored on read-only 8-inch magnetically coated Mylar disks, known today as floppy disks. At release, the product was called the IBM 23FD Floppy Disk Drive System. The disks could hold 80 kilobytes of data. Unlike hard drives, a user could easily move a floppy disk in its protective shell from one drive to another. Later, in 1973, IBM released a read/write floppy disk, which then became the industry standard.
 

1969



In 1969, the Apollo Guidance Computer (AGC), with its rope memory on board, was launched aboard the Apollo 11 spacecraft, which carried American astronauts to the Moon and back. This rope memory was made by hand and could hold 72 kilobytes of data. Manufacturing rope memory was laborious, slow, and required skills similar to weaving; it could take months to weave a program into rope memory. But it was the right tool for the times, when it was vital to fit as much as possible into a tightly limited space. A wire that passed through one of the circular cores represented a 1; a wire passing around the core represented a 0. Our data byte would take a person a few minutes to weave into the rope.

1977




The Commodore PET, the first (successful) personal computer, was launched in 1977. The PET used the Commodore 1530 Datasette — data plus cassette. The PET converted data into analog audio signals, which were then stored on cassettes. This made for an economical and reliable, though very slow, storage solution. Our small data byte could be transferred at about 60-70 bytes per second. The cassettes could hold about 100 kilobytes per 30-minute side, with two sides per tape. For example, one side of a cassette could hold about two 55 KB images. The Datasette was also used with the Commodore VIC-20 and Commodore 64.
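It is worth checking why a 30-minute side maps to roughly 100 kilobytes. Taking 65 bytes/s as the midpoint of the range above (our own assumption for the sketch):

```python
# Why ~100 KB fits on a 30-minute cassette side: at roughly 65 bytes/s,
# filling one side takes close to half an hour.
rate_bytes_per_s = 65            # midpoint of the 60-70 B/s range
side_capacity = 100 * 1024       # ~100 KB in bytes
seconds = side_capacity / rate_bytes_per_s
print(round(seconds / 60), "minutes")   # ~26 minutes
```

So the capacity figure and the transfer rate are two sides of the same coin: the tape simply runs out of minutes.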

1978




A year later, in 1978, MCA and Philips introduced LaserDisc under the name DiscoVision. Jaws was the first film sold on LaserDisc in the United States. Its audio and video quality was far better than competitors', but LaserDisc was too expensive for most consumers. Unlike the VHS tapes people used to record television programs, LaserDisc could not be recorded on. LaserDiscs used analog video, analog FM stereo sound, and pulse-code modulation (PCM) digital audio. The discs were 12 inches (30.48 cm) in diameter and consisted of two single-sided aluminum discs coated with plastic. Today, LaserDisc is remembered as the foundation of CDs and DVDs.

1979




A year later, in 1979, Alan Shugart and Finis Conner founded Seagate Technology with the idea of scaling a hard drive down to the size of a 5¼-inch floppy disk, which was then the standard. Their first product, in 1980, was the Seagate ST-506, the first hard drive for compact computers. The disk held five megabytes of data, which at the time was five times more than a standard floppy. The founders achieved their goal of shrinking the drive to the size of a 5¼-inch floppy disk. The new storage device was a rigid metal platter coated on both sides with a thin layer of magnetic material. Our data bytes could be transferred to the disk at 625 kilobytes per second.

1981




Fast forward a couple of years to 1981, when Sony introduced the first 3.5-inch floppy disks. Hewlett-Packard became the technology's first adopter in 1982 with its HP-150. This raised the profile of the 3.5-inch floppy and gave it wide distribution in the industry. The disks were single-sided, with a formatted capacity of 161.2 kilobytes and an unformatted capacity of 218.8 kilobytes. A double-sided version appeared in 1982, and the Microfloppy Industry Committee (MIC), a consortium of 23 media companies, based its 3.5-inch floppy specification on Sony's original design, fixing the format into history as we know it. Our data bytes could now be stored on an early version of one of the most widespread media: the 3.5-inch floppy disk. Later, a pair of 3.5-inch floppies with The Oregon Trail became the most important part of my childhood.

1984




Shortly afterwards, in 1984, the read-only compact disc (Compact Disc Read-Only Memory, CD-ROM) was announced. These were 550-megabyte CD-ROMs from Sony and Philips. The format grew out of digital audio CDs (CD-DA), which were used to distribute music. CD-DA was developed by Sony and Philips in 1982 with a capacity of 74 minutes. Legend has it that when Sony and Philips were negotiating the CD-DA standard, one of the four people involved insisted that it be able to hold the entire Ninth Symphony. The first product released on CD was Grolier's Electronic Encyclopedia, in 1985. The encyclopedia contained nine million words, which took up only 12% of the available disc space of 553 mebibytes. We would have more than enough room for both the encyclopedia and our data byte. Shortly afterwards, in 1985, computer companies worked together to create a standard for the discs so that any computer could read them.

1984


Also in 1984, Fujio Masuoka developed a new type of floating-gate memory called flash memory, which could be erased and rewritten many times.

Let's dwell on flash memory and the floating-gate transistor. Transistors are electrical gates that can be switched on and off individually. Since each transistor can be in two distinct states (on and off), it can store two different numbers: 0 and 1. The floating gate is a second gate added in the middle of the transistor, insulated by a thin oxide layer. These transistors use a small voltage applied to the gate to indicate whether the transistor is on or off, which in turn translates to a 0 or a 1.
 
With floating gates, when a suitable voltage is applied across the oxide layer, electrons tunnel through it and get trapped on the gate. Even when the power is turned off, the electrons remain there. When no electrons sit on the floating gate, the cell represents a 1; when electrons are trapped, it represents a 0. Reversing the process — applying a suitable voltage across the oxide layer in the opposite direction — lets the electrons pass off the floating gate and restores the transistor to its original state. That is what makes the cells programmable and non-volatile. Our byte could be programmed into transistors as 01101010, with electrons trapped in the floating gates to represent the zeros.
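That convention — trapped electrons mean 0, no electrons mean 1 — can be sketched as a toy model (our own illustration, with hypothetical names):

```python
# Toy model of a row of floating-gate cells storing one byte.
# Convention from the text: trapped electrons = 0, no electrons = 1.
def program_byte(value: int) -> list:
    """Return True for each cell where electrons must be trapped (the 0 bits)."""
    bits = format(value, "08b")
    return [bit == "0" for bit in bits]

cells = program_byte(0x6A)   # the byte 01101010
print(cells)                 # True marks cells holding a 0
```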

Masuoka's design was somewhat more affordable but less flexible than electrically erasable PROM (EEPROM), since it required multiple groups of cells to be erased together — but this is also what accounted for its speed.

Masuoka was working for Toshiba at the time. He eventually left to work at Tohoku University, unhappy that the company had not rewarded him for his work. Masuoka sued Toshiba for compensation, and in 2006 he was paid 87 million yen, equivalent to about 758 thousand US dollars. This still seems trifling given how influential flash memory has been in the industry.

Since we're talking about flash, it's also worth noting the difference between NOR and NAND flash. As we already know from Masuoka, flash stores information in memory cells consisting of floating gate transistors. Technology names are directly related to how memory cells are organized.

In NOR flash memory, individual memory cells are connected in parallel, allowing random access. This architecture shortens the read times needed for random access to microprocessor instructions. NOR flash is ideal for lower-density, mostly read-only applications, which is why most CPUs typically load their firmware from NOR flash. Masuoka and his colleagues presented the invention of NOR flash in 1984 and NAND flash in 1987.

NAND flash developers gave up random access in exchange for a smaller memory cell. This yields a smaller chip size and a lower cost per bit. The NAND flash architecture consists of memory transistors connected in series in groups of eight. This achieves high storage density, a smaller memory cell, and faster writing and erasing, since blocks of data can be programmed at once. The price is having to rewrite data whenever it is not written sequentially and data already exists in the block.
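That last constraint — rewriting when data already exists in the block — can be sketched with a toy model (our own simplified illustration; real NAND controllers are far more sophisticated):

```python
# Sketch of the NAND constraint described above: pages can only be
# written into an erased block, so changing existing data forces the
# whole block to be erased and the surviving pages replayed.
ERASED = None

class NandBlock:
    def __init__(self, pages: int = 8):
        self.pages = [ERASED] * pages
        self.erase_count = 0

    def write(self, index: int, data: bytes):
        if self.pages[index] is not ERASED:
            # Read-modify-write: save live pages, erase, replay them.
            saved = list(self.pages)
            self.pages = [ERASED] * len(self.pages)
            self.erase_count += 1
            for i, page in enumerate(saved):
                if i != index and page is not ERASED:
                    self.pages[i] = page
        self.pages[index] = data

block = NandBlock()
block.write(0, b"old")      # fresh block: no erase needed
block.write(0, b"new")      # overwrite: forces a block erase
print(block.erase_count)    # 1
```

This is also why sequential writes are cheap on NAND while in-place updates cause write amplification.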

1991


Let's move on to 1991, when a prototype solid-state drive (SSD) was created by SanDisk, then known as SunDisk. The design combined a flash memory array, non-volatile memory chips, and an intelligent controller that automatically detected and corrected defective cells. The drive's capacity was 20 megabytes in a 2.5-inch form factor, and it cost about $1,000. IBM used this drive in a ThinkPad computer.

1994




One of my personal childhood favorites was the Zip disk. In 1994, Iomega released the Zip disk, a 100-megabyte cartridge in a 3.5-inch form factor, slightly thicker than a standard 3.5-inch disk. Later versions could store up to 2 gigabytes. The appeal of these disks was that they were the size of a floppy but could store much more data. Our data bytes could be written to a Zip drive at 1.4 megabytes per second. By comparison, at the time the 1.44 megabytes of a 3.5-inch floppy were written at about 16 kilobytes per second. In a Zip drive, the heads read and write data without contact, as if flying above the surface — similar to a hard disk, but unlike other floppies. Zip disks soon became obsolete due to reliability and availability problems.

1994




In the same year, SanDisk introduced CompactFlash, which was widely used in digital video cameras. As with compact discs, CompactFlash speed is based on x-ratings such as 8x, 20x, 133x, and so on. The maximum transfer rate is derived from the transfer rate of the original audio CD, 150 kilobytes per second. The transfer rate is R = K × 150 kB/s, where R is the transfer rate and K is the nominal rating. So for a 133x CompactFlash card, our data byte would be written at 133 × 150 kB/s, or about 19,950 kB/s (19.95 MB/s). The CompactFlash Association was founded in 1995 to create an industry standard for flash memory cards.
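The x-rating formula above is trivial to turn into code (our own sketch of the arithmetic):

```python
# The x-rating formula from the text: R = K * 150 kB/s,
# where 150 kB/s is the original audio-CD transfer rate.
def transfer_rate_kb_s(x_rating: int) -> int:
    return x_rating * 150

print(transfer_rate_kb_s(133))   # 19950 kB/s, i.e. ~19.95 MB/s
print(transfer_rate_kb_s(8))     # 1200 kB/s for an 8x card
```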

1997


A few years later, in 1997, the rewritable compact disc (CD-RW) was released. This optical disc was used for data storage as well as for copying and transferring files between devices. A CD-RW can be rewritten about 1,000 times, which at the time was not a limiting factor, since users rarely overwrote their data.

CD-RWs are based on changing surface reflectivity. In the case of CD-RW, phase shifts in a special coating of silver, tellurium, and indium determine whether the disc reflects the read beam or not — in other words, a 0 or a 1. When the compound is in a crystalline state, it is translucent, meaning a 1. When the compound is melted into an amorphous state, it becomes opaque and non-reflective, meaning a 0. So we could write our data byte as 01101010.

DVDs eventually took over most of the CD-RW market.

1999


Let's move on to 1999, when IBM introduced the smallest hard drives in the world at the time: the IBM Microdrive, in 170 MB and 340 MB capacities. These were small 2.54 cm (1-inch) hard drives designed to fit into CompactFlash Type II slots. The plan was to create a device that would be used like CompactFlash but with more capacity. However, they were soon superseded by USB flash drives, and then by larger CompactFlash cards as those became available. Like other hard drives, microdrives were mechanical and contained tiny spinning platters.

2000


A year later, in 2000, USB flash drives were introduced. These drives consisted of flash memory in a small form factor with a USB interface. Depending on the USB version, the speed varied: USB 1.1 is limited to 1.5 megabytes per second, while USB 2.0 can handle 35 megabytes per second and USB 3.0 can handle 625 megabytes per second. The first USB 3.1 Type-C drives were announced in March 2015 and had read/write speeds of 530 megabytes per second. Unlike floppies and optical discs, USB devices are harder to scratch, yet offer the same capabilities for storing data and transferring or backing up files. Floppy and CD-ROM drives were quickly displaced by USB ports.
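To get a feel for those rates, here is the time to copy a CD's worth of data (700 MB) over each version — our own sketch, treating the quoted figures as effective megabytes per second; real-world throughput varies:

```python
# Rough copy times for a 700 MB file at the effective rates quoted above.
rates_mb_s = {"USB 1.1": 1.5, "USB 2.0": 35, "USB 3.0": 625}
file_mb = 700
for version, rate in rates_mb_s.items():
    print(f"{version}: ~{file_mb / rate:.0f} s")
```

Nearly eight minutes on USB 1.1 shrinks to about a second on USB 3.0.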

2005




In 2005, hard disk drive (HDD) manufacturers began shipping products with perpendicular magnetic recording, or PMR. Interestingly, this happened at the same time Apple announced the iPod Nano, which used flash memory instead of the 1-inch hard drives of the iPod Mini.

A typical hard drive contains one or more rigid platters coated with a magnetically sensitive film of tiny magnetic grains. Data is recorded as the magnetic recording head flies just above the spinning platter. This is much like a traditional gramophone, except that in a gramophone the needle is in physical contact with the record. As the platters spin, the air in contact with them creates a gentle breeze. Just as air over an airplane wing generates lift, the air generates lift on the head's air-bearing surface. The head rapidly changes the magnetization of one magnetic region of grains so that its magnetic pole points up or down, denoting a 1 or a 0.
 
The predecessor of the PMR was longitudinal magnetic recording, or LMR. The PMR recording density can exceed the LMR recording density by more than three times. The main difference between PMR and LMR is that the grain structure and magnetic orientation of the stored PMR media data is columnar rather than longitudinal. PMR has better thermal stability and improved signal to noise ratio (SNR) due to better grain separation and uniformity. It also features improved recordability thanks to stronger head fields and better magnetic alignment of media. Like LMR, the fundamental limitations of PMR are based on the thermal stability of the magnetically recorded data bits and the need to have enough SNR to read the recorded information.

2007


In 2007, Hitachi Global Storage Technologies announced the first 1 TB hard drive. The Hitachi Deskstar 7K1000 used five 3.5-inch 200-gigabyte platters spinning at 7,200 rpm. This is a striking advance over the world's first hard drive, the IBM RAMAC 350, whose capacity was about 3.75 megabytes. Oh, how far we have come in 51 years! But wait, there's more.

2009


In 2009, technical work began on non-volatile memory express, or NVMe. Non-volatile memory (NVM) is memory that retains data permanently, unlike volatile memory, which needs constant power to hold its data. NVMe answers the need for a scalable host controller interface for PCIe-based solid-state peripheral components — hence the name. More than 90 companies joined the working group developing the project. All of this built on the specification for the Non-Volatile Memory Host Controller Interface (NVMHCI). The best NVMe drives today can handle about 3,500 megabytes per second reading and 3,300 megabytes per second writing. Writing the data byte j that we started with would be extremely fast compared to the couple of minutes of hand-weaving rope memory for the Apollo Guidance Computer.
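A quick bit of arithmetic (our own sketch, using the write speed quoted above) puts that in perspective:

```python
# How long the NVMe write speed quoted above takes for one gigabyte,
# versus minutes per byte for hand-woven rope memory.
nvme_write_mb_s = 3300
gigabyte_mb = 1024
print(f"NVMe: {gigabyte_mb / nvme_write_mb_s:.2f} s per GB")   # ~0.31 s
```

A third of a second per gigabyte, against minutes of weaving per byte in 1969.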

Present and future


Storage class memory


Now that we have traveled through time (ha!), let's look at the current state of storage-class memory (SCM). SCM, like NVM, is non-volatile, but it also delivers performance better than or comparable to main memory, along with byte addressability. SCM aims to solve some of today's cache problems, such as the low density of static random-access memory (SRAM). With dynamic random-access memory (DRAM) we get better density, but at the cost of slower access. DRAM also needs constant power to refresh memory. Let's unpack that a bit: power is necessary because the electrical charge on the capacitors gradually leaks away, so without intervention the data on the chip would soon be lost. To prevent this leakage, DRAM requires an external memory-refresh circuit that periodically rewrites the data in the capacitors, restoring them to their original charge.

Phase-change memory (PCM)


We looked earlier at how phases change in a CD-RW. PCM is similar. The phase-change material is usually Ge-Sb-Te, also known as GST, which can exist in two different states: amorphous and crystalline. The amorphous state has a higher resistance, denoting a 0, than the crystalline state, denoting a 1. By assigning data values to intermediate resistances, PCM can store multiple bits per cell as MLC (multi-level cells).

Spin-transfer torque random access memory (STT-RAM)


STT-RAM consists of two ferromagnetic, permanent magnetic layers separated by a dielectric, that is, an insulator that can transmit electrical force without conducting. It stores data bits based on the difference in magnetic directions. One magnetic layer, called the reference, has a fixed magnetic direction, while the other magnetic layer, called free, has a magnetic direction, which is controlled by the transmitted current. For 1, the magnetization direction of two layers is aligned. For 0, both layers have opposite magnetic directions.

Resistive random access memory (ReRAM)
A ReRAM cell consists of two metal electrodes separated by a metal-oxide layer. This is a bit like Masuoka's flash memory design, where electrons penetrate an oxide layer and get trapped in a floating gate, or the reverse. With ReRAM, however, the cell's state is determined by the concentration of free oxygen in the metal-oxide layer.

Although these technologies are promising, they still have drawbacks. PCM and STT-RAM have high write latency: PCM's is ten times that of DRAM, while STT-RAM's is ten times that of SRAM. PCM and ReRAM also have limited write endurance before a hard error occurs, meaning a memory element gets stuck at a particular value.

In August 2015, Intel announced Optane, its product built on 3D XPoint. Optane claims performance 1,000 times higher than NAND solid-state drives, at a price four to five times that of flash memory. Optane is proof that SCM is more than just experimental technology. It will be interesting to watch these technologies develop.

Hard disks (HDD)


Helium Hard Disk (HHDD)


A helium drive is a high-capacity hard disk drive (HDD) filled with helium and hermetically sealed during manufacture. Like other hard disks, as we said earlier, it resembles a record player with a magnetically coated spinning platter. Typical hard drives simply have air inside the enclosure, but that air causes drag as the platters spin.

Helium balloons float because helium is lighter than air. Helium is in fact 1/7 the density of air, which reduces the drag on the spinning platters and cuts the amount of energy needed to spin the disks. That benefit is secondary, though: helium's main distinction is that it lets you pack 7 platters into the same form factor that usually held only 5. Recalling our airplane-wing analogy, this is a perfect analogue — because helium reduces drag, turbulence is eliminated.

We also know that helium balloons begin to sink after a few days because the helium escapes. The same can be said of drives. It took years before manufacturers could build an enclosure that kept helium from escaping for the drive's entire service life. Backblaze ran experiments and found that helium drives had an annualized error rate of 1.03%, versus 1.06% for standard drives. Of course, that difference is so small that it is hard to draw any serious conclusion from it.

The helium-filled form factor can contain a hard drive that uses the PMR we discussed above, or microwave-assisted magnetic recording (MAMR), or heat-assisted magnetic recording (HAMR). Any magnetic storage technology can be paired with helium instead of air. In 2014, HGST combined two cutting-edge technologies in its 10 TB helium hard drive, which used host-managed shingled magnetic recording, or SMR. Let's dwell on SMR briefly, and then look at MAMR and HAMR.

Shingled Magnetic Recording (SMR)


Earlier, we looked at perpendicular magnetic recording (PMR), SMR's predecessor. Unlike PMR, SMR records new tracks that overlap part of the previously recorded magnetic track. This makes the previous track narrower, allowing higher track density. The technology's name comes from the fact that the overlapping tracks strongly resemble the shingled rows of a roof.

SMR makes the write process much more complicated, since writing one track overwrites part of the adjacent track. This doesn't matter when the disk platter is empty and the data is sequential. But once you write to a series of tracks that already contain data, the existing neighboring data is destroyed. If an adjacent track holds data, it must be rewritten. This is quite similar to the NAND flash we discussed earlier.
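The rewrite cascade can be sketched with a toy model (our own simplification; we assume tracks are grouped into bands, each track shingled over by all later tracks in its band):

```python
# Sketch of shingled-recording write amplification: tracks in a band
# overlap their successors, so rewriting track i disturbs every later
# track in the band, which must then be rewritten too.
def tracks_to_rewrite(band_size: int, target: int) -> int:
    """Number of tracks physically rewritten to update one track."""
    return band_size - target   # the target plus every track shingled over it

print(tracks_to_rewrite(band_size=16, target=0))    # worst case: 16
print(tracks_to_rewrite(band_size=16, target=15))   # last track: just 1
```

This is why SMR favors sequential, append-style workloads: the last track of a band can be written with no amplification at all.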

SMR devices hide this complexity in firmware, presenting an interface like any other hard disk. Host-managed SMR devices, on the other hand, cannot be used without special adaptation of applications and operating systems: the host must write to the device strictly sequentially. In return, device performance becomes 100% predictable. Seagate began shipping SMR disks in 2013, claiming 25% higher density than PMR.

Microwave Magnetic Recording (MAMR)


Microwave-assisted magnetic recording (MAMR) is a magnetic storage technology that, like HAMR (see below), uses energy to assist writing. The key part of MAMR is the spin-torque oscillator (STO), which sits in close proximity to the write head. When current is applied to the STO, a circular electromagnetic field with a frequency of 20-40 GHz is generated through the polarization of electron spins.

Under such a field, the ferromagnet used for MAMR resonates, causing the magnetic moments of its domains to precess. The magnetic moment effectively deviates from its axis, and the write head then needs considerably less energy to flip its direction.

MAMR technology makes it possible to use ferromagnetic materials with greater coercivity, meaning magnetic domains can be shrunk without fear of triggering the superparamagnetic effect. The STO helps shrink the write head, making it possible to record on smaller magnetic domains and thus increasing recording density.

Western Digital, also known as WD, unveiled this technology in 2017. Toshiba followed soon after, in 2018. While WD and Toshiba pursue MAMR, Seagate is betting on HAMR.

Heat-Assisted Magnetic Recording (HAMR)


Heat-assisted magnetic recording (HAMR) is an energy-assisted magnetic storage technology that can dramatically increase the amount of data stored on a magnetic device, such as a hard drive, by using heat from a laser to help write data onto the surface of the disk platter. The heating lets data bits sit much closer together on the platter, increasing data density and capacity.

This technology is quite hard to implement. A 200 mW laser quickly heats a tiny area to 400 °C before writing, without disturbing or damaging the rest of the data on the disk, and the entire process of heating, writing, and cooling must complete in less than a nanosecond. Solving these problems required developing nano-scale surface plasmons — a surface-guided laser — instead of direct laser heating, as well as new kinds of glass platters and heat-control coatings that withstand rapid spot heating without damaging the write head or any nearby data, plus various other technical hurdles that had to be overcome.

Despite much skepticism, Seagate first demonstrated this technology in 2013. The first disks began shipping in 2018.

The end of the film, skip to the beginning!


We started in 1951 and end the article with a look into the future of storage technology. Data storage has changed a great deal over time: from paper tape to metal tape, magnetic tape, rope memory, spinning disks, optical discs, flash memory, and beyond. Progress has brought faster, smaller, and more capable storage devices.

Comparing NVMe to the 1951 UNISERVO metal tape, NVMe can read 486,111% more digits per second. Comparing NVMe to my childhood favorite, Zip disks, NVMe can read 213,623% more digits per second.

The only constant is the use of 0s and 1s; the ways we store them vary enormously. I hope that the next time you burn a CD-RW of songs for a friend, or save a home video in the Optical Disc Archive, you'll think about how the non-reflective surface translates to a 0 and the reflective one to a 1. Or if you record a mixtape on a cassette, remember that it is a close relative of the Datasette used in the Commodore PET. Finally, don't forget to be kind and rewind.

Thanks to Robert Mustacchi and Rick Altherr for the tidbits (I can't help myself) throughout the article!

What else is useful to read on the Cloud4Y blog

Easter eggs on topographic maps of Switzerland
Computer brands of the 90s, part 1
How the hacker’s mother got into jail and infected the boss’s computer
Diagnostics of network connections on the EDGE virtual router
How the bank “broke”

Subscribe to our Telegram channel so you don't miss the next article! We post no more than twice a week, and only on business. We also remind you that Cloud4Y can provide secure and reliable remote access to business applications and the information needed to ensure business continuity. Remote work is an additional barrier to the spread of coronavirus. Details are available from our managers on the website.
