How not to protect your systems in the cloud

Often, when I tell someone about a vulnerability, they look at me as if I were a madman with a sign reading “Repent, for the end of the world is near!”

Now everyone is rushing in a panic to organize remote work, making simple mistakes and stepping on every rake available, so I decided to share a few dramatic stories featuring gypsy hackers, open CVEs and professional but slightly naive devops engineers. Of course, I had to omit or even deliberately distort some details so as not to upset the customers. For the most part this is not drawn from my current work at Technoserv, but let this post serve as a small memo on what not to do, even if you really want to.

How to fence the server


Once there was a server in a data center. An old-school, bare-metal one, without any fancy containers or virtualization. Several generations of employees ago, one of the developers configured it “temporarily, just to accept a few large files for the project.” The company actually cared about information security, but, as often happens, colleagues from IS met the business halfway and agreed to a temporary option with full Internet access.

Fortunately, the server sat in the gray zone of the DMZ and could not connect directly to critical services on the internal network. Port 22 was exposed to the outside, and inside they simply added a few local users with individual passwords for ssh/sftp logins. Key-based access was considered too inconvenient. Then developers from another project came running and asked for help automating regular uploads from a counterparty's server, because “you have a convenient server with approved network access.” Then it happened again.

The result was an extremely cheerful situation: a supposedly temporary server that never received any updates, yet a bunch of business processes were already tied to it and several terabytes of data flowed through it every month.

Since the server was so important, we decided to monitor it, and the CPU graphs immediately showed a flat shelf at 100%. We logged in to sort it out: a bunch of rand processes running as a suspicious user named test, a pile of memory consumed by them, and logs full of continuous brute-force attempts from around the world.
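The triage above can be sketched as a few shell commands (a hypothetical checklist; the log path assumes a Debian-style host, and the later commands are guarded so the sketch degrades gracefully on hosts without lsof or an auth.log):

```shell
# 1. Who is eating the CPU? Top five processes by CPU share:
ps -eo pid,user,pcpu,comm --sort=-pcpu | head -5

# 2. Deleted-but-still-open files, i.e. payloads that now live only in RAM:
command -v lsof >/dev/null && lsof +L1 2>/dev/null || true

# 3. Where is the ssh brute force coming from? Count failed logins per IP:
[ -f /var/log/auth.log ] && grep 'Failed password' /var/log/auth.log \
    | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn | head \
    || true
```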

Before consigning the server to oblivion, we poked at the processes with a stick: lsof immediately showed deleted but unclosed files still hanging in RAM. Unfortunately, we could not work out exactly what the attacker was doing, but the behavior looked more like a human at the keyboard than a script running on its own. While examining a script found in RAM, we were especially pleased by inserts like scanez clasa (“scanning the class” in Romanian):

#!/bin/bash
while [ "1" ]; do
class="
#168
"
classb="`seq 1 255`"
classb2=($classb)
num_classb=${#classb2[*]}
b=${classb2[$((RANDOM%num_classb))]}
classc="`seq 1 255`"
classc2=($classc)
num_classc=${#classc2[*]}
c=${classc2[$((RANDOM%num_classc))]}
classes=($class)
num_class=${#classes[*]}
a=${classes[$((RANDOM%num_class))]}
echo "scanez clasa ${a}.${b}"
./a ${a}.${c}
done

As far as I know, nothing serious leaked (or nobody told me about it) and the attacker never reached the internal perimeter, but ever since, the company has been telling stories about gypsy horse-thief hackers.

Rules


  1. Do not let temporary, convenient but unsafe solutions become a fixture of the infrastructure. Yes, I know they only have to survive the quarantine, but at least agree in advance on when you will remove them: after how many days or hours. And then clean them up without delay. At the very least, tell others about them. After all, an exposed service starts getting probed within about 20 minutes on average.
  2. Do not expose ssh or bare RDP to the outside world; it is better to provide access via a VPN or web tunneling services. I don't know why, but there people treat the set of allowed accounts and their passwords more responsibly.
  3. For ssh, use key-based authentication instead of passwords: keys cannot be brute-forced the way passwords can.
  4. Deploy something like fail2ban so that hosts hammering your logins get banned automatically.
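As a sketch of what that looks like in practice (the drop-in paths and user names here are hypothetical, assuming a Debian-style layout with sshd and fail2ban installed):

```
# /etc/ssh/sshd_config.d/hardening.conf  (hypothetical drop-in)
PasswordAuthentication no      # keys only, nothing left to brute-force
PermitRootLogin no
AllowUsers deploy uploader     # explicit allow-list instead of "any local user"

# /etc/fail2ban/jail.d/sshd.local
[sshd]
enabled  = true
maxretry = 5
bantime  = 1h
```

Reload sshd and fail2ban after editing, and test a key login in a second session before closing the first one.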

A white IP is good, but a firewall is better!


I perfectly understand that NAT was a necessary crutch that made developers' lives harder and prevented quick node-to-node connectivity schemes. Nevertheless, its side effect of hiding the internal structure of the network is very useful for complicating a run-of-the-mill automated attack. And yes, I understand very well that the most correct option is a whitelist firewall that explicitly allows only the necessary connections. The problem is that in the world of continuous agile there is never time or budget for such trifles as strict firewall configuration or, God forbid, SELinux. After all, they interfere with debugging, hurt the critical time-to-market metric and keep developers and devops from living in peace.

The situation became even more interesting when cloud infrastructure, deployed automatically on demand, became the industry standard. Most cloud solutions assume that protecting the heap of spun-up containers and virtual machines is the end user's responsibility. As a rule, they provide the infrastructure and allocate white IP addresses; in essence, they provide a set of capabilities, not ready-made solutions. By default everything is closed, but that is inconvenient and slows down development. So let's open everything up and test in peace, and on production, of course, we will do it properly.
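A default-deny whitelist need not be complicated. A minimal nftables sketch (the office address range and the open ports are placeholders; adapt them to your own services):

```
# Hypothetical /etc/nftables.conf: drop everything, allow only what's needed.
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iifname "lo" accept
        tcp dport 443 accept                          # public HTTPS only
        ip saddr 203.0.113.0/24 tcp dport 22 accept   # ssh from the office range only
    }
}
```

Everything not listed, including the database and the backend, simply never answers from the outside.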

Often this leads to funny cases. I once watched a slightly pirated server of a famous MMORPG. Clans, donations, endless discussions of each other's mothers and other joys. Everyone wondered why some characters leveled up so fast and were generally omnipotent. I ran nmap over the address range closest to the game server.

It turned out that the entire infrastructure, including the frontend, backend and database, simply had its ports open to the outside world. And what is the most likely password for user sa if the database is reachable from the whole Internet? That's right: sa. After that, the hardest part was merely figuring out the table structure.
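You can run the same kind of check on yourself before someone else's nmap does. A quick self-audit sketch using iproute2's ss, available on most modern Linux hosts:

```shell
# Print local TCP listeners bound to every interface (0.0.0.0 or [::]),
# i.e. the sockets the whole Internet can reach if the firewall lets it.
ss -tln | awk 'NR > 1 && ($4 ~ /^0\.0\.0\.0:/ || $4 ~ /^\[::\]:/) {print $4}'
```

Anything in that output that is not meant to be public belongs behind the firewall or bound to localhost.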

A similar story happened with one customer who, while working from home for a while, opened up remote access to an admin panel for his home machine. Naturally, the admin panel had no authorization, since it was considered a secure internal resource. And the customer did not bother restricting the source address: he simply opened the port to the entire Internet.

And ELK servers open to the world pop up every week. You can find anything in them, from personal data to credit card numbers.

Rules


  1. The firewall must be whitelist-based. Under no circumstances should the backend be accessible from the outside. And do not hesitate to ask contractors and remote employees which IP they will connect from. In the end, a dedicated IP costs about 150 rubles a month, an affordable expense for the ability to work from home.
  2. Always use HTTPS and full authentication, even if they are “just” test machines.
  3. Strictly separate test and production environments. Never, ever use the same accounts in both.

Samba


I am especially often delighted by Samba servers, traditionally used to organize shared access to resources. Why not set up guest access so colleagues from the neighboring department can conveniently exchange files?

In a small company everything went fine until there were more departments. After some time, access had to be configured for remote branches, and the perfectly “reasonable” solution was to open samba access from the outside. Well, everyone has their own password, what could go wrong? Nobody remembered about guest access until it turned out that a tangible part of the HDD was clogged with other people's data. Automated scanners quickly found the free file storage, and the disk began filling up with other people's encrypted archives. And in one of the directories, during the audit, we found a collection of selected adult films featuring actresses 60+ (luckily not 18-, or there would have been trouble with law enforcement).
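For comparison, a sketch of a less hospitable smb.conf (the share name, path and group are hypothetical; the address ranges stand in for your LAN and VPN networks):

```
# Hypothetical smb.conf fragment: no guests, no access from the open Internet.
[global]
    map to guest = Never
    restrict anonymous = 2
    hosts allow = 192.168.0.0/16 10.0.0.0/8   # LAN and VPN ranges only
    hosts deny = ALL

[share]
    path = /srv/share
    guest ok = no
    valid users = @staff
```

With guest access mapped to Never and hosts deny = ALL as the default, a stray scanner gets nothing to fill up.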

Findings


  1. Never share storage without a VPN.
  2. Do not expose samba or ftp servers directly to the Internet; keep them behind the perimeter.

Backups? Expensive, and everything works anyway


I had one customer who did not understand at all why he should spend extra money on backups if everything already worked. It was expensive, and everything was already fine-tuned. As a result, employees who open and run everything that arrives in their mail predictably caught a ransomware virus. The 1C database was lost and could only be restored thanks to the paper archive and one contractor who had once copied the database for himself.

I talked with the director and explained the key things the company needed to change to eliminate the risk of losing the database. He nodded throughout the conversation and finished with a wonderful phrase: “A shell does not hit the same crater twice. Now there is nothing to be afraid of.” He refused backups again and, naturally, lost all the data in exactly the same scenario six months later.
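Even without a budget for a proper backup system, a cron-driven sketch like this beats nothing at all (the paths and the 14-day retention are hypothetical; it archives a directory into dated tarballs and prunes old copies):

```shell
# backup_once SRC DST: archive directory SRC into DST as a dated tarball,
# then prune archives older than 14 days so the target disk does not fill up.
backup_once() {
    src=$1
    dst=$2
    mkdir -p "$dst"
    tar -czf "$dst/base-$(date +%F).tar.gz" -C "$(dirname "$src")" "$(basename "$src")"
    find "$dst" -name 'base-*.tar.gz' -mtime +14 -delete
}

# Example (run daily from cron; DST should ideally live on a separate host,
# out of reach of ransomware running on this one):
# backup_once /srv/1c-base /backup
```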

Rules


  1. , (- ). , .
  2. . - , .
  3. . . , helpdesk.

Don't forget to fire the accounts too!


In my experience, small and medium-sized companies usually start with fully manual account management. That is, the new sales manager stomps into the den of bearded admins, where he is solemnly handed a login, a password and access rights. All of this works well until the company starts to grow.

Take, for example, a company that sold composite tanks. Their leadership had recently changed, and it was decided to conduct a full security audit. They even took us on a tour of the production floor. The spectacle was very impressive: in a huge workshop, large workpieces rotated while fiberglass was wound onto them, and workers ran around with buckets of epoxy resin, carefully smearing it over the workpieces.

A separate building housed the administrative wing, where we dug straight into how the informational side of production was organized. At first glance they had a fairly logical scheme: access to the customer database was granted only through an internal AD account, with the manager's approval. When someone quit, he ran through a checklist, handed over equipment and cards, and the account was deactivated. All of this was done manually, since no budget had been allocated for full-fledged identity management.

During the audit we found that many years ago they had built a self-written portal so that salespeople could remotely get the customer data they needed. They had even begun migrating the infrastructure to the cloud, but stopped halfway due to some internal difficulties. AD could not be integrated with the portal, so its accounts were deleted via the same dismissal checklist. Everything seemed fine, but in the logs we found an active account belonging to a certain Vasily, who had been fired several years earlier and was now successfully working for a competitor. Moreover, judging by the logs, he had been exporting almost the entire client base at least once a month.

The account was immediately blocked, and we started investigating how the person had managed to bypass the internal regulations. It turned out that Vasily had first gained portal access as a sales manager and then transferred to a managerial position directly in the workshop, after which he quit. But the dismissal checklist for the workshop was completely different, and deactivating the portal account was not on it. Although, it would seem, a single common checklist for access control across all systems would have avoided the problem.
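Until a proper IDM appears, even a crude cross-check catches a Vasily. A sketch (the file formats are hypothetical: one login per line, a portal export and an HR list of current employees):

```shell
# stale_accounts PORTAL HR: print logins that are still active in the portal
# but no longer appear in the HR list of current employees.
stale_accounts() {
    portal=$1
    hr=$2
    tmp_p=$(mktemp)
    tmp_h=$(mktemp)
    # comm needs sorted input; -23 keeps lines unique to the first (portal) file.
    sort "$portal" > "$tmp_p"
    sort "$hr" > "$tmp_h"
    comm -23 "$tmp_p" "$tmp_h"
    rm -f "$tmp_p" "$tmp_h"
}
```

Run it on a schedule against exports from every system that has its own accounts, not just AD, and review anything it prints.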

Rules


  1. If you have more than 100 employees, consider introducing a full-fledged IDM system.
  2. Take dismissal data not from the exit checklist but directly from the HR department. Dismissals come in different flavors, and the exit sheet may never be brought to you.
  3. — . . , « , », , ( , , . .).
  4. « », - .
  5. . .
  6. . , — . .

What's the bottom line?


For some reason, information security people are traditionally perceived as unfriendly individuals who chase everyone around with boring regulations and interfere with work. In fact, all the grotesque examples above occur quite often in real life, and it is precisely the security guys and paranoid admins who help avoid them.

Just try to find a common language. First of all, IS employees themselves should not turn into watchmen whose only job is to nag everyone about the password policy without explaining it. The right approach is to meet with developers and devops, dive into their problems, and then work out a reasonable compromise: one that does not shine passwordless access at the outside world, yet remains convenient to use.

And devops, I wish you would sometimes be at least a little paranoid.
The text was prepared by Vladimir Chikin, Head of Information Technology at Technoserv Cloud.
