HPE Remote Work Solutions

Today I want to tell you a story: the history of the evolution of computer technology and the emergence of remote work, from ancient times to the present day.

IT development


The main thing that can be learned from the history of IT is ...



Of course, that IT develops in a spiral. Solutions and concepts discarded decades ago take on new meaning and begin to work successfully under new conditions, with new tasks and new capacities. In this respect IT is no different from any other area of human knowledge, or from the history of the Earth as a whole.


Once upon a time when computers were big


“I think there is a world market for maybe five computers.” Thomas Watson, IBM CEO, 1943.
Early computer technology was big. No, wrong: it was monstrous, cyclopean. A complete computing machine occupied an area comparable to a sports hall and cost completely unrealistic money. As an example of its components, take this ferrite-core RAM module (1964).



This module measures 11 cm × 11 cm and holds 512 bytes (4096 bits). A cabinet full of these modules barely matched the capacity of a now-ancient 3.5" floppy disk (1.44 MB ≈ 2950 modules), while consuming very noticeable electrical power and heating up like a steam locomotive.
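As a quick back-of-the-envelope check of that figure (assuming 1.44 MB is read as 1.44 MiB, as the text does):

    # Rough check of the module count quoted above
    module_bytes = 512                    # one 11 x 11 cm ferrite-core module
    floppy_bytes = 1.44 * 1024 * 1024     # a 3.5" floppy, reading "MB" as MiB
    print(round(floppy_bytes / module_bytes))   # ~2949 modules per floppy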

It is to this enormous size that we owe the English term for fixing program code: “debugging”. One of the first programmers in history, Grace Hopper (yes, a woman), a naval officer, made an entry in the operations log in 1945 after investigating a problem with the program.



Since a moth is, generally speaking, a bug (an insect), all subsequent problems and their solutions were reported to management as “debugging” (literally, removing bugs). The word “bug” became firmly attached to program failures and errors in code, and the process of fixing them became known as debugging.

With the development of electronics, and semiconductor electronics in particular, the physical dimensions of machines began to shrink while computing power, on the contrary, grew. But even so, it was still impossible to give every person a personal computer.

“There is no reason anyone would want a computer in their home.” Ken Olsen, founder of DEC, 1977.

In the 1970s the term “minicomputer” appeared. I remember that when I first read it many years ago, I pictured something like a netbook, almost a handheld. I could not have been further from the truth.



“Mini” only in comparison with the huge machine rooms: it was still several cabinets of equipment worth hundreds of thousands or millions of dollars. But computing power had already grown so much that it was no longer loaded at 100% all the time, and at the same time computers started to become accessible to students and university professors.

And then it arrived!



Few people think about the Latin roots in the English language, but it is one of them that gave us remote access as we know it today. Terminus (Latin): end, boundary, goal. The goal of the T-800 Terminator was to end the life of John Connor. We also know that the transport stations where passengers board and disembark, and cargo is loaded and unloaded, are called terminals: the end points of routes.

Hence the concept of terminal access, and below you can see the most famous terminal in the world, one that still lives in our hearts.



The DEC VT100 is called a terminal because it terminates the data line. It has virtually zero computing power of its own; its only task is to display the information received from the big machine and send keyboard input back to it. Although the VT100 itself physically died long ago, we still use it to the full: practically every modern terminal emulator imitates it.



Our days


I would start counting “our days” from the early 1980s, from the appearance of the first widely available processors with any significant computing power. The main processor of the era is traditionally considered to be the Intel 8088 (of the x86 family), as the founder of the architecture that went on to win. So what is the fundamental difference from the concept of the 1970s?

For the first time, there is a trend toward moving information processing from the center to the periphery. Not every task requires the insane (compared to the weak x86) capacities of a mainframe or even a minicomputer. Intel did not stand still: in the 1990s it released the Pentium family, which became the first truly mass-market home processor in Russia. These processors were already capable of quite a lot: not just writing a letter, but multimedia and working with small databases. In fact, for small businesses the need for servers disappeared entirely; everything could be done at the periphery, on client machines. Every year processors become more powerful, and the gap between servers and personal computers keeps shrinking in terms of raw computing power, often coming down to redundant power supplies, hot-swap support, and special rack-mount enclosures.

If you compare today's “toy” client processors (laughable to anyone who administered heavy Intel servers in the 90s) with the supercomputers of the past, it becomes downright unsettling.

Let's take a look at an old-timer, practically my peer: the Cray X-MP/24 from 1984.



This machine was among the top supercomputers of 1984, with two 105 MHz processors and a peak performance of 400 MFLOPS (millions of floating-point operations per second). The specific machine shown in the photo stood in the NSA cryptography laboratory and was used to break ciphers. If you convert $15 million in 1984 into 2020 dollars, the cost comes to $37.4 million, or about $93,500 per MFLOPS.



The machine on which I am writing these lines has a 2017 Core i5-7400, not new at all, and even in its release year it was the most junior quad-core among mid-range desktop processors. Its 4 cores at 3.0 GHz base frequency (3.5 GHz with Turbo Boost) deliver between 19 and 47 GFLOPS depending on the benchmark, at a price of about 16 thousand rubles per processor. A complete machine built around it can be priced at roughly $750 (at prices and exchange rates of March 1, 2020).

In the end, a quite average desktop processor of our day outperforms a top-10 supercomputer of the foreseeable past by 50 to 120 times, and the drop in the unit cost of a MFLOPS is absolutely monstrous: from roughly $93,500 down to a few cents, on the order of 3.7 million times.
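Redoing that arithmetic explicitly, with the figures quoted above and an assumed middle-of-the-road 30 GFLOPS for the desktop (the exact ratios shift with whichever benchmark number you pick):

    # Comparison rebuilt from the figures in the text
    cray_mflops, cray_price = 400, 37_400_000      # Cray X-MP/24, in 2020 dollars
    pc_mflops, pc_price = 30_000, 750              # ~30 GFLOPS assumed for the i5 box

    print(pc_mflops / cray_mflops)                 # ~75x raw performance
    cray_per_mflops = cray_price / cray_mflops     # ~93,500 $ per MFLOPS
    pc_per_mflops = pc_price / pc_mflops           # ~0.025 $ per MFLOPS
    print(round(cray_per_mflops / pc_per_mflops))  # ~3,740,000x cheaper per MFLOPS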

With capacities like these at the periphery, why would we still need servers and centralized computing at all? Absolutely incomprehensible!

A jump backward: the spiral completes a turn


Diskless stations


The first signal that the migration of computing to the periphery would not be final was the advent of diskless workstation technology. When workstations are spread widely across an enterprise, and especially in dirty industrial premises, managing and supporting them becomes a very tough problem.



The concept of “corridor time” appeared: the percentage of time a technical support employee spends in the corridor, on the way to the employee with a problem. This time is paid, but completely unproductive. A far from minor role, especially in dirty premises, was played by hard drive failures. So let's remove the disk from the workstation and do everything over the network, including booting. In addition to an address, the network adapter receives extra information from the DHCP server: the address of a TFTP server (a simplified file service) and the name of a boot image, which it loads into RAM to start the machine.
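As a rough illustration of how simple such network boot is to set up (a hedged sketch, not tied to any product mentioned here; the addresses, the pxelinux.0 loader and the /srv/tftp path are placeholder assumptions), a single dnsmasq instance can play both roles:

    # dnsmasq: hand out client addresses plus the standard PXE boot parameters
    # (TFTP server and boot file name, the classic DHCP options 66/67)
    dhcp-range=192.168.10.50,192.168.10.200,12h
    dhcp-boot=pxelinux.0        # boot image name given to the network adapter
    enable-tftp                 # serve that image over the built-in TFTP server
    tftp-root=/srv/tftp         # directory holding pxelinux.0 and the boot image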



Besides fewer breakdowns and less corridor time, a faulty machine no longer has to be debugged on the spot: you simply bring a new one and take the old one back for diagnostics at a properly equipped workbench. But that is not all!

A diskless station is also much more secure: if someone breaks into the room and carries off all the computers, it is merely a loss of equipment. No data is stored on diskless stations.

Remember this moment: information security starts to play an ever greater role after the “carefree childhood” of information technology. And three terrible, important letters invade IT more and more: GRC (Governance, Risk, Compliance).



Terminal servers


The spread of ever more powerful personal computers at the periphery ran far ahead of the development of data networks. Classic client-server applications of the 1990s and early 2000s did not work well over a thin channel whenever any significant amount of data had to be exchanged. This was especially painful for remote offices connected over a modem and a telephone line, which also periodically hung or dropped. And then...

The spiral completed another turn, and we were back in terminal mode, this time with the concept of terminal servers.



In effect, we were back in the 1970s, with their zero clients and centralized computing power. It quickly became apparent that, beyond the purely economic argument about channels, terminal access offers tremendous opportunities for organizing secure access from outside: working from home for employees, or extremely limited and controlled access for contractors coming from untrusted networks and untrusted, uncontrolled devices.

However, for all their advantages and progressiveness, terminal servers also had a number of drawbacks: low flexibility, the noisy neighbor problem, server-only Windows, and so on.

The birth of proto-VDI




By the early to mid-2000s, industrial-grade virtualization of the x86 platform had already arrived on the scene. And someone voiced an idea that was simply floating in the air: instead of consolidating all clients onto terminal server farms, why not give everyone a personal VM with client Windows, and even administrator access?

Abandoning fat clients


In parallel with the virtualization of sessions and operating systems, an approach was developing that lightened the client side at the application level.

The logic was quite simple: far from everyone had a personal laptop yet, the Internet was not great for many, and plenty of people could connect only from Internet cafes with, to put it mildly, very limited rights. In fact, all they could launch was a browser. The browser became an indispensable attribute of the OS, and the Internet firmly entered our lives.

In other words, there was a parallel trend of moving logic from the client to the center in the form of web applications, which require only the simplest client, an Internet connection, and a browser.
And so we ended up right where we started, with zero clients and central servers, only this time we arrived there by several independent routes.



Virtual desktop infrastructure


Broker


In 2007 VMware, the leader of the industrial virtualization market, released the first version of its VDM (Virtual Desktop Manager) product, practically the first offering on the emerging virtual desktop market. A response from the terminal server leader, Citrix, was not long in coming: in 2008, on the back of the XenSource acquisition, XenDesktop appeared. There were other vendors with their own offerings, of course, but let's not dig too deep into history and drift away from the concept.

The concept itself, however, has survived to this day. The key component of VDI is the connection broker.
It is the heart of the virtual desktop infrastructure.

The broker is responsible for the most important VDI processes (a minimal sketch follows the list below):

  • Determines which resources (machines / sessions) are available to the connecting client;
  • Balances clients across pools of machines / sessions when necessary;
  • Forwards the client to the selected resource.
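To make those three duties concrete, here is a minimal Python sketch of a connection broker; the pool structure, the entitlement check and the load metric are simplified assumptions, not any vendor's actual API:

    # Toy connection broker: entitlement check -> balancing -> forwarding
    from dataclasses import dataclass, field

    @dataclass
    class Pool:
        name: str
        resources: list = field(default_factory=list)   # machines / sessions
        load: dict = field(default_factory=dict)        # resource -> active sessions

        def least_loaded(self):
            return min(self.resources, key=lambda r: self.load.get(r, 0))

    @dataclass
    class Broker:
        entitlements: dict      # user -> pool names the user may access
        pools: dict             # pool name -> Pool

        def connect(self, user):
            # 1. Determine which resources are available to this client
            allowed = self.entitlements.get(user, [])
            if not allowed:
                raise PermissionError(f"{user} has no entitled pools")
            # 2. Balance: pick the least loaded resource across entitled pools
            pool = min((self.pools[n] for n in allowed),
                       key=lambda p: sum(p.load.values()))
            resource = pool.least_loaded()
            pool.load[resource] = pool.load.get(resource, 0) + 1
            # 3. Forward the client to the selected resource
            return f"redirect {user} -> {resource} (pool {pool.name})"

    broker = Broker(
        entitlements={"alice": ["win10-pool"]},
        pools={"win10-pool": Pool("win10-pool", resources=["vm-01", "vm-02"])},
    )
    print(broker.connect("alice"))   # redirect alice -> vm-01 (pool win10-pool)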

Today the VDI client (terminal) can be virtually anything with a screen: a laptop, smartphone, tablet, kiosk, thin or zero client. The answering side, the one carrying the productive load, can be a terminal server session, a physical machine, or a virtual machine. Modern, mature VDI products are tightly integrated with the virtual infrastructure and manage it automatically, deploying additional virtual machines or removing ones that are no longer needed.

Standing somewhat apart, but extremely important for some customers, is VDI support for hardware-accelerated 3D graphics for the work of designers and engineers.

Protocol


The second extremely important part of a mature VDI solution is the protocol for accessing the virtual resource. If we are talking about working inside a corporate LAN, with an excellent, reliable 1 Gbps network to the workstation and 1 ms of latency, you can pick almost any protocol and not think twice.

You do need to think when the connection runs over an uncontrolled network whose quality can be absolutely anything, down to speeds of tens of kilobits and unpredictable latency. And that is exactly the case when organizing real remote work: from country houses, from home, from airports and cafes.

Terminal Servers vs Client VMs


With the advent of VDI, it seemed like time to say goodbye to terminal servers. Why are they needed if everyone has their own personal VM?

From a purely economic standpoint, however, it turned out that for typical mass workplaces, identical to the point of nausea, nothing beats terminal servers in price per session. For all its merits, the “1 user = 1 VM” approach spends noticeably more resources on virtual hardware and a full-fledged OS, which hurts the economics of typical workplaces.

For top managers' workplaces, for non-standard and heavily loaded workplaces, or where elevated rights (up to administrator) are required, a dedicated per-user VM has the advantage. Within such a VM you can allocate resources individually, grant rights at any level, and balance VMs across virtualization hosts under high load.

VDI and Economics


For years I have been hearing the same question: so, is VDI cheaper than just handing everyone a laptop? And for years I have given exactly the same answer: for ordinary office employees, VDI is not cheaper if you count only the direct costs of providing the equipment. Like it or not, laptops keep getting cheaper, while servers, storage, and system software still cost serious money. If it is time to refresh your fleet and you hope VDI will save you money: no, it will not.

I mentioned the three terrible letters GRC above, and VDI is exactly about GRC. It is about risk management, about security, and about convenient, controlled access to data. All of that usually costs quite a lot of money when you try to implement it across a pile of heterogeneous equipment. With VDI, control becomes simpler, security gets stronger, and your hair becomes soft and silky.

HPE



iLO


HPE is far from a newcomer to remote management of server infrastructure. Just think: in March the legendary iLO (Integrated Lights-Out) turned 18. Remembering my own admin days in the 2000s, I personally could not get enough of it. Initial rack mounting and cabling were all that had to be done in the noisy, cold data center. Everything else, including installing the OS, could be done from my desk, with two monitors and a mug of hot coffee. And that was 13 years ago!



Today HPE servers are, not without reason, a long-standing and undeniable quality standard, and the gold standard of remote management, iLO, plays an important role in that.
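As a small taste of what “remote everything” looks like in practice, here is a hedged sketch of reading a server's power state over iLO's Redfish REST API; the host name, the credentials and even the exact resource path are assumptions that vary from setup to setup:

    # Minimal Redfish query against an iLO (illustrative, not official HPE sample code)
    import requests

    ILO = "https://ilo.example.local"        # hypothetical iLO address
    AUTH = ("admin", "password")             # use a real, least-privilege account

    resp = requests.get(f"{ILO}/redfish/v1/Systems/1/",
                        auth=AUTH,
                        verify=False,        # lab only: iLOs often use self-signed certs
                        timeout=10)
    resp.raise_for_status()
    system = resp.json()
    print(system.get("Model"),
          system.get("PowerState"),
          system.get("Status", {}).get("Health"))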



I would also like to separately note HPE's actions in helping humanity keep the coronavirus under control: HPE announced that until the end of 2020 (at least) the iLO Advanced license is available to everyone free of charge.

InfoSight


If your infrastructure has more than ten servers and your administrator is never bored, the HPE InfoSight cloud system, built on artificial intelligence, is an excellent addition to the standard monitoring tools. It not only watches the state of the systems and draws charts, but also recommends further actions on its own, based on the current situation and trends.





Be smart, be like Otkritie Bank, try InfoSight!

OneView


Last but not least in this set, I want to mention HPE OneView: an entire product portfolio with huge capabilities for monitoring and managing the whole infrastructure. And all of this without getting up from your desk, which, in the current situation, may well be somewhere out in the country.



Storage is no slouch either!


Of course, all storage systems have long supported remote management and monitoring; that happened many years ago. So today I want to talk about something else, namely metro clusters.

Metro clusters are not at all new to the market, yet they are still not very popular: inertia of thinking and first impressions take their toll. Yes, they already existed ten years ago, but they cost an absolute fortune. The years since the first metro clusters have changed both the industry and the accessibility of the technology to the general public.

I remember projects where the storage estate was deliberately split: one part, for the most critical services, on a metro cluster, and another part on plain synchronous replication (several times cheaper).

In 2020, in fact, a metro cluster costs you practically nothing extra if you are already able to provide two sites and the channels between them, because synchronous replication requires exactly the same channels as a metro cluster does. Software has long been licensed in bundles, so synchronous replication comes packaged together with the metro cluster, and the only thing keeping plain one-way replication alive is the metro cluster's need for a stretched L2 network. And even that is fading, as L2 over L3 is already sweeping the country.



So what is the fundamental difference between synchronous replication and a metro cluster when it comes to remote work?

It is very simple: a metro cluster works by itself, automatically, always, almost instantly.

What does a failover on synchronous replication look like for an infrastructure of at least a few hundred VMs?

  1. An alert comes in.
  2. The on-duty shift analyzes the situation: you can safely allow 10 to 30 minutes just to register the alert and make a decision.
  3. If the on-duty engineers are not authorized to initiate the failover themselves, add a good 30 minutes more to reach the authorized person and get formal confirmation to begin.
  4. Pressing the Big Red Button.
  5. 10-15 minutes for timeouts, remounting volumes, and re-registering VMs.
  6. 30 minutes to change IP addressing (an optimistic estimate).
  7. And finally, starting the VMs and bringing up the productive services.

The total RTO (time to restore business processes) can safely be estimated at around 4 hours.

Compare with the situation on the metro cluster.

  1. The storage system realizes that the other leg of the metro cluster is lost: 15-30 seconds.
  2. The virtualization hosts realize that the first data center is lost: 15-30 seconds (in parallel with step 1).
  3. Automatic restart of a third to a half of the VMs in the second data center: 10-15 minutes until the services are up.
  4. Somewhere around this point, the on-duty shift figures out what has happened.

Total: RTO = 0 for individual services, 10-15 minutes in the general case.
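Spelling the same arithmetic out (the per-step minutes are simply the estimates from the two lists above; the VM boot time in the first scenario is an assumption that fills out the roughly 4-hour figure):

    # Rough RTO comparison built from the step estimates above (minutes)
    sync_replication = {
        "receive alert, analyze, decide": 30,
        "escalate for authorization": 30,
        "remount volumes, re-register VMs": 15,
        "re-address IP networks": 30,
        "boot VMs and productive services": 120,   # assumed bulk of the ~4h estimate
    }
    metro_cluster = {
        "storage and hosts detect site loss": 1,   # 15-30 s, in parallel
        "HA restart of affected VMs": 15,
    }

    print(sum(sync_replication.values()) / 60, "hours")   # ~3.75, i.e. "about 4 hours"
    print(sum(metro_cluster.values()), "minutes")         # ~16 minutes worst case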

Why do only a third to a half of the VMs need to be restarted? Here is the thing:

  1. If you do everything sensibly and enable automatic VM balancing, then on average only half of the VMs run in any one data center. The whole point of a metro cluster is to minimize downtime, so it is in your interest to minimize the number of VMs in the line of fire.
  2. Some services can be clustered at the application level and spread across several VMs. Such paired VMs are pinned, one by one, to different data centers, so that in the event of a disaster the service does not have to wait for a VM restart at all.

With a well-built infrastructure based on stretched metro clusters, business users work from anywhere with minimal interruptions, even in the event of a disaster at data center level. In the worst case, the interruption lasts about as long as one cup of coffee.

And of course, metro clusters work perfectly both on the HPE 3Par Valinor and on the brand-new Primera!



Remote Workstation Infrastructure


Terminal servers


For terminal servers there is no need to invent anything new: HPE has been supplying some of the best servers in the world for this job for many years. The ageless classics, the DL360 (1U) and DL380 (2U), or for AMD fans the DL385. And of course there are blade servers, both the classic c7000 and the new Synergy composable platform.



For every taste, every color, maximum sessions per server!

“Classic” VDI + HPE SimpliVity


By “classic VDI” I mean the 1 user = 1 VM concept with client Windows. And of course there is no workload nearer and dearer to hyperconverged systems than VDI, especially for systems with deduplication and compression.



Here HPE can offer both its own hyperconverged SimpliVity platform and servers / certified nodes for partner solutions, such as vSAN ReadyNodes for building VDI on a VMware vSAN infrastructure.

Let's talk a little more about our own SimpliVity solution. As the name gently hints, simplicity is at the forefront: simple to deploy, simple to manage, simple to scale.

Hyperconverged systems are one of the hottest topics in IT today, with roughly 40 vendors of various sizes. According to the Gartner Magic Quadrant, HPE is globally in the top 5 and is one of the world leaders: it understands where the industry is heading and is able to turn that understanding into hardware.

Architecturally, SimpliVity is a classic hyperconverged system with controller virtual machines, which means it can support different hypervisors, unlike systems built into a specific hypervisor. Indeed, as of April 2020, VMware vSphere and Microsoft Hyper-V are supported, and KVM support has been announced in the plans. The key feature of SimpliVity since its market debut has been hardware acceleration of compression and deduplication using a dedicated accelerator card.



It should be noted that deduplication and compression are global and always on; they are not an optional feature but part of the solution's architecture.



HPE is, of course, being a little disingenuous when it claims 100:1 efficiency, calculated in its own special way, but the space efficiency really is very high; it is just that the number 100:1 is painfully beautiful. Let's look at how SimpliVity is technically implemented to show such numbers.

Snapshots. Snapshots are implemented 100% correctly as RoW (Redirect-on-Write), so they are taken instantly and carry no performance penalty, which distinguishes them from some other systems. Why do we need local snapshots without penalties? Very simple: to reduce the RPO from 24 hours (the typical RPO of a nightly backup) to tens or even single minutes.
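A conceptual sketch of why RoW is instant and penalty-free (a toy model to illustrate the idea, not SimpliVity's actual on-disk format): a snapshot merely freezes the current block map, and every new write is redirected to a fresh block, so nothing is ever copied on the write path.

    # Toy Redirect-on-Write volume: snapshots copy the map, never the data
    class RoWVolume:
        def __init__(self):
            self.blocks = {}      # block id -> data (shared block store)
            self.live_map = {}    # logical offset -> block id
            self.snapshots = []   # each snapshot is a frozen copy of the map
            self._next_id = 0

        def write(self, offset, data):
            # Redirect: always allocate a new block, never overwrite in place
            block_id = self._next_id
            self._next_id += 1
            self.blocks[block_id] = data
            self.live_map[offset] = block_id

        def snapshot(self):
            # Instant: we copy the (small) map, not the data itself
            self.snapshots.append(dict(self.live_map))
            return len(self.snapshots) - 1

        def read(self, offset, snap_id=None):
            mapping = self.live_map if snap_id is None else self.snapshots[snap_id]
            return self.blocks[mapping[offset]]

    vol = RoWVolume()
    vol.write(0, "v1")
    snap = vol.snapshot()
    vol.write(0, "v2")                       # redirected to a new block
    print(vol.read(0), vol.read(0, snap))    # -> v2 v1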

Backups. A snapshot differs from a backup only in how the virtual machine management system treats it. If it disappears together with the machine when the machine is deleted, it was a snapshot; if it stays, it is a backup. So any snapshot can be considered a full backup, as long as it is marked as such in the system and is not deleted.

Of course, many will object: what kind of backup is it if it lives on the same system? Here is a very simple answer in the form of a counter-question: tell me, do you have a formal threat model that defines where backups must be stored? This is a perfectly honest backup against deleting a file inside a VM, and against deleting the VM itself. If the backup absolutely must be stored on a separate system, there is a choice: replicate the snapshot to a second SimpliVity cluster or to HPE StoreOnce.



And this is where such an architecture turns out to be simply perfect for any kind of VDI. After all, VDI means hundreds or even thousands of extremely similar machines with the same OS and the same applications. Global deduplication walks across all of this and squeezes out not just 100:1 but much better. Deploy 1000 VMs from a single template? Not a problem at all: the machines will take longer to register in vCenter than to clone.

For users with special performance requirements, and for those who need 3D accelerators, the SimpliVity G line was created.



This series does not use the hardware deduplication accelerator, so the number of disks per node is reduced to a level the controller can handle in software. This frees up PCIe slots for any other accelerators. The available memory per node has also been doubled, up to 3 TB, for the most demanding workloads.



SimpliVity is ideal for building geographically distributed VDI infrastructures with data replication to a central data center.



Such a VDI architecture (and not only VDI) is especially interesting in the context of Russian realities: enormous distances (and therefore latency) and far from ideal channels. Regional hubs are created (or even just one or two SimpliVity nodes in a truly remote office), local users connect to them over fast links, full control and management from the center is preserved, and only a small amount of real, valuable data, not garbage, is replicated back to the central site.

And of course, SimpliVity fully integrates with OneView and InfoSight.

Thin and Zero Clients


Thin clients are specialized devices used exclusively as terminals. Since the client carries virtually no load beyond maintaining the channel and decoding video, it almost always has a passively cooled processor and a small boot disk just big enough to start a dedicated embedded OS, and that is all. There is practically nothing in it to break, and stealing it is pointless: it is cheap and stores no data.

There is a special category of thin clients, the so-called zero clients. Their main difference from thin clients is the absence of even a general-purpose embedded OS: they run exclusively on a chip with firmware. They often carry dedicated hardware accelerators for decoding the video stream of terminal protocols such as PCoIP or HDX.

Even though the big Hewlett-Packard split into separate HPE and HP, HP's thin clients cannot go unmentioned.

The choice is wide, for every taste and need, up to and including multi-monitor workstations with hardware-accelerated video.



HPE service for your remote work


And last, but by no means least, I want to mention HPE services. Listing all the HPE service levels and their capabilities would take too long, but there is one offering that is extremely important for remote work: a field engineer from HPE or an authorized service center. You keep working remotely from your favorite country house, listening to the bumblebees, while the HPE bee, having arrived at the data center, replaces the disks or a failed power supply in your servers.

HPE CallHome


In today's environment of restricted movement, the Call Home function is more relevant than ever. Any HPE system with this feature can report a hardware or software failure to the HPE Support Center on its own. And it is quite likely that a replacement part and/or a service engineer will reach you long before you notice any problems with your productive services.

Personally, I strongly recommend enabling this feature.
