Tips and resources for building serverless applications


Although serverless technologies have rapidly gained popularity in recent years, many misconceptions and concerns still surround them. Vendor lock-in, tooling, cost management, cold starts, monitoring, the development life cycle: all of these topics come up whenever serverless is discussed. In this article we will look at some of them and share tips and links to useful resources that can help beginners create powerful, flexible, and economical serverless applications.

Misconceptions about serverless technologies


Many believe that serverless and Functions as a Service (FaaS) are practically the same thing, and that the difference is too small to justify a separate term. Although AWS Lambda was one of the stars of the serverless boom and remains one of the most popular elements of serverless architecture, this architecture is about much more than FaaS.

The basic principle of serverless is that you do not need to worry about managing and scaling infrastructure, and you only pay for what you use. Many services fit these criteria: AWS DynamoDB, S3, SNS or SQS, Graphcool, Auth0, Now, Netlify, Firebase, and many others. In general, serverless means using the full power of cloud computing without having to manage infrastructure or tune it for scale. It also means that infrastructure-level security is no longer your problem, which is a huge advantage given how difficult and complex it is to comply with security standards. Finally, you do not buy infrastructure up front; you pay only for its use.

Serverless can be considered a "state of mind": a certain mentality when designing solutions. Avoid approaches that require maintaining any infrastructure. With a serverless approach, we spend our time on problems that directly affect the project and benefit our users: we build sustainable business logic, develop user interfaces, and create adaptive, reliable APIs.

For example, if you can avoid managing and maintaining a full-text search platform yourself, do so. This approach to building applications can greatly accelerate time to market, because you no longer need to think about managing complex infrastructure. Shed the responsibilities and costs of infrastructure management and focus on building the applications and services your customers need. Patrick Debois called this approach "serviceful", and the term has been adopted by the serverless community. Functions should be seen as the connective tissue between services, deployed as individual modules (instead of deploying an entire library or web application). This gives you incredible granularity when managing deployments and application changes. If you cannot deploy functions this way, it may be a sign that they do too much and should be refactored.

Some are put off by vendor lock-in when developing cloud applications. The same concern is raised about serverless technologies, and it is hardly groundless. In our experience, however, building serverless applications on AWS, together with Lambda's ability to integrate with other AWS services, is part of what makes serverless architectures strong. This is a good example of synergy, where the result of the combination is more than the sum of its parts. Trying to avoid vendor lock-in, you may run into even bigger problems. When working with containers, it is easier to maintain your own abstraction layer between cloud providers. But with serverless solutions the effort will not pay off, especially if cost-efficiency matters from the start. Be sure to find out how vendors deliver their services. Some specialized services depend on integration points with other vendors and provide plug-and-play connectivity out of the box. It is easier to invoke a Lambda function from an API Gateway endpoint than to proxy the request to some container or EC2 instance. Graphcool offers easy configuration with Auth0, which is simpler than wiring up third-party authentication yourself.

Choosing the right vendor for your serverless application is an architecture-level decision. When you create an application, you do not expect to return to managing servers one day. Choosing a cloud vendor is no different from choosing to use containers or a particular database, or even a programming language.

Consider:

  • What services you need and why.
  • What services cloud providers offer and how you can combine them with the chosen FaaS solution.
  • What programming languages are supported (dynamically or statically typed, compiled or interpreted, what the benchmarks look like, what cold-start performance is like, what the open source ecosystem offers, and so on).
  • What your security requirements are (SLA, 2FA, OAuth, HTTPS, SSL, and so on).
  • How to manage your CI/CD and software development cycles.
  • What infrastructure-as-code solutions you can take advantage of.

If you are extending an existing application and adding serverless functions incrementally, this may somewhat limit your options. However, almost all serverless technologies provide some kind of API (via REST or message queues) that lets you build extensions independently of the application core, with simple integration. Look for services with clear APIs, good documentation, and a strong community, and you can't go wrong. Ease of integration is often a key metric, and it is probably one of the main reasons for AWS's success since Lambda's release in 2015.

When serverless is useful


Serverless technologies can be applied almost everywhere, and their advantages are not limited to a single class of applications. Thanks to serverless, the barrier to entry into the cloud is lower today than ever. If developers have an idea but don't know how to manage cloud infrastructure and optimize costs, they no longer need to find an engineer for that. If a startup wants to build a platform but fears that costs might get out of hand, it can easily turn to serverless solutions.

Because of the cost savings and ease of scaling, serverless solutions are equally applicable to internal systems and to external ones, up to a web application with a multi-million audience. Bills are measured not in euros but in cents. Renting the simplest AWS EC2 instance (t1.micro) for a month costs about €15, even if you do nothing with it (who has never forgotten to turn one off?!). By comparison, to reach the same level of spending over the same period, you would need to run a 512 MB Lambda function for one second roughly three million times. And if you do not use the function at all, you pay nothing.
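The back-of-the-envelope comparison above can be sketched in a few lines. The rates below are illustrative assumptions, not current AWS list prices, and the free tier is ignored, so the break-even point (on the order of a couple of million one-second invocations) will shift as pricing changes:

```python
# Rough comparison of an always-on instance vs. pay-per-use Lambda.
# All prices here are illustrative assumptions, not current AWS list prices.

EC2_MONTHLY_COST = 15.0                    # assumed monthly cost of a small always-on instance
LAMBDA_PRICE_PER_GB_SECOND = 0.0000166667  # assumed compute price per GB-second
LAMBDA_PRICE_PER_REQUEST = 0.0000002       # assumed price per invocation

def lambda_invocations_for_budget(budget, memory_gb=0.5, duration_s=1.0):
    """How many invocations of a given size and duration fit into a budget."""
    cost_per_invocation = (memory_gb * duration_s * LAMBDA_PRICE_PER_GB_SECOND
                           + LAMBDA_PRICE_PER_REQUEST)
    return int(budget / cost_per_invocation)

invocations = lambda_invocations_for_budget(EC2_MONTHLY_COST)
print(f"{EC2_MONTHLY_COST:.2f}/month buys roughly {invocations:,} "
      f"one-second 512 MB Lambda invocations")
```

If the function is never invoked, the budget term is simply never spent, which is the whole point of the pay-per-use model.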

Since serverless technologies are primarily event-driven, it is fairly easy to add serverless infrastructure to older systems. For example, using AWS S3, Lambda, and Kinesis, you can build an analytics service for a legacy retail system that receives data through an API.
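A minimal sketch of the glue code in such a pipeline might look like this. The stream name and handler shape are assumptions for illustration; the Kinesis client is injectable so the function can be unit-tested locally without any AWS credentials:

```python
import json

STREAM_NAME = "retail-analytics"  # hypothetical Kinesis stream name

def handler(event, context, kinesis_client=None):
    """React to S3 'object created' events and forward metadata to Kinesis.

    `kinesis_client` is injected for local testing; inside Lambda it
    defaults to a real boto3 client.
    """
    if kinesis_client is None:
        import boto3
        kinesis_client = boto3.client("kinesis")

    forwarded = 0
    for record in event.get("Records", []):
        s3 = record["s3"]
        payload = {
            "bucket": s3["bucket"]["name"],
            "key": s3["object"]["key"],
            "size": s3["object"].get("size", 0),
        }
        kinesis_client.put_record(
            StreamName=STREAM_NAME,
            Data=json.dumps(payload).encode("utf-8"),
            PartitionKey=payload["key"],
        )
        forwarded += 1
    return {"forwarded": forwarded}
```

The legacy system never changes: it keeps writing files to S3, and the event wiring does the rest.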

Most serverless platforms support several languages, most commonly Python, JavaScript, C#, Java, and Go. Usually there are no restrictions on which libraries you can use, so you are free to rely on your favorite open source libraries. However, it is wise not to overdo the dependencies, so that your functions perform well and the enormous scalability of your serverless application is not undermined. The more packages that have to be loaded into the container, the longer the cold start takes.

A cold start happens when the container, runtime, and handler have to be initialized before first use. Because of this, function latency can reach 3 seconds, which is not great for impatient users. However, cold starts only occur on the first invocation after several minutes of inactivity, so many consider this a minor inconvenience that can be worked around by pinging the function regularly to keep it warm, or simply ignore it altogether.
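The keep-warm trick mentioned above can be as simple as a scheduled rule sending a marker event that the function short-circuits. The event shape here (`{"warmup": true}`) is an assumption; any agreed-upon marker works:

```python
def handler(event, context):
    """Hypothetical handler that short-circuits scheduled warm-up pings.

    A CloudWatch Events / EventBridge scheduled rule can send an event
    such as {"warmup": true} every few minutes; returning early keeps
    the container warm without running real business logic.
    """
    if isinstance(event, dict) and event.get("warmup"):
        return {"warmed": True}

    # ... real business logic would go here ...
    return {"statusCode": 200, "body": "hello"}
```

The ping costs a few milliseconds of billed time per invocation, which is usually far cheaper than losing impatient users to 3-second latencies.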

Although AWS has released Aurora Serverless, a serverless SQL database, SQL databases are not ideal for this kind of application: transactions depend on connections, which can quickly become a bottleneck under heavy AWS Lambda traffic. Yes, the developers are constantly improving Aurora Serverless and you should experiment with it, but today NoSQL solutions like DynamoDB are a much better fit for serverless systems. No doubt this situation will change soon, though.
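Part of why DynamoDB fits so well is that each request is a stateless HTTP call, so there is no connection pool for thousands of concurrent Lambda instances to exhaust. A minimal sketch, with hypothetical table and field names and the table resource injected for local testing:

```python
class OrdersRepository:
    """Minimal sketch of DynamoDB access from a Lambda function.

    The `table` object is injected so the code can be unit-tested
    locally; in Lambda it would typically be
    boto3.resource("dynamodb").Table("orders").
    """

    def __init__(self, table):
        self.table = table

    def record_order(self, order_id, amount_cents):
        # Each put_item call is an independent HTTP request; no
        # long-lived connection is held between invocations.
        item = {"order_id": order_id, "amount_cents": amount_cents}
        self.table.put_item(Item=item)
        return item
```

Contrast this with a relational database, where every concurrent Lambda container would hold (or wait for) its own connection.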

The tooling also imposes quite a few restrictions, especially around local testing. Although there are solutions like Docker-Lambda, DynamoDB Local, and LocalStack, they require painstaking work and a significant amount of configuration. All these projects are actively developing, though, so it is only a matter of time before the tools reach the level we need.

The impact of serverless technologies on the development cycle


Since your infrastructure is just configuration, you can define and deploy your code using scripts, such as shell scripts. Or you can turn to configuration-as-code solutions like AWS CloudFormation. Although this service does not cover every area, it lets you define the specific resources you need, including Lambda functions. Where CloudFormation falls short, you can write your own custom resource (backed by a Lambda function) that closes the gap. This way you can do almost anything, even configure dependencies outside your AWS environment.
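As a taste of what "infrastructure is just configuration" means in practice, here is a minimal, illustrative SAM-flavored CloudFormation fragment; the resource name, handler, runtime, and paths are all assumptions, not a prescribed layout:

```yaml
# Illustrative SAM template fragment; names and paths are placeholders.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  AnalyticsFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.9
      CodeUri: src/
      Events:
        IngestApi:
          Type: Api
          Properties:
            Path: /ingest
            Method: post
```

The whole stack (function, API endpoint, permissions) is declared in one file that can be versioned, reviewed, and deployed like any other code.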

Since all of this is just configuration, you can parameterize your deployment scripts for specific environments, regions, and users, especially with infrastructure-as-code solutions like CloudFormation. For example, you can deploy a copy of the infrastructure for each branch in the repository to test each branch in complete isolation during development. This dramatically shortens the feedback loop for developers who want to know whether their code behaves correctly in a live environment. Managers need not worry about the cost of deploying numerous environments, because only actual usage is billed.

DevOps engineers have fewer worries, because they mostly just need to make sure developers have the correct configuration. You no longer manage instances, load balancers, or security groups. Hence the term NoOps is used more and more, although it is still important to be able to configure infrastructure, especially when it comes to IAM configuration and optimizing cloud resources.

There are very powerful monitoring and visualization tools such as Epsagon, Thundra, Dashbird, and IOpipe. They let you monitor the current state of serverless applications, provide logging and tracing, record performance metrics and architectural bottlenecks, perform cost analysis and forecasting, and much more. They not only give DevOps engineers, developers, and architects a comprehensive view of how applications behave, but also let managers watch the situation in real time, with per-second resource costs and cost prediction. All of this is much harder to set up with self-managed infrastructure.

Designing serverless applications is much simpler because you don't need to deploy web servers, manage virtual machines or containers, patch servers, operating systems, or Internet gateways, and so on. Abstracting away all these responsibilities lets a serverless architecture focus on the main thing: solving the needs of the business and its customers.

Although the tooling could be better (it improves every day), developers can focus on implementing business logic and on distributing the application's complexity sensibly across the services in the architecture. Serverless applications are driven by events abstracted by the cloud provider (for example, SQS, S3 events, or DynamoDB streams). Developers therefore only need to write business logic that responds to certain events; they don't have to worry about how best to implement databases and message queues, or how to organize optimal data access on specific storage hardware.

The code can be run and debugged locally, as in any development process. Unit testing remains the same. The ability to deploy the entire application infrastructure with a custom stack configuration lets developers get important feedback quickly, without worrying about the cost of testing or the impact on expensive managed environments.
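Because a Lambda handler is ultimately just a function of its event, unit testing really is unchanged. A hypothetical handler and a plain assert-based test, runnable with no cloud resources at all:

```python
# A hypothetical handler and a plain unit test for it; no cloud
# resources are needed because the handler is a pure function of its event.

def handler(event, context):
    name = (event or {}).get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}

def test_handler_greets_by_name():
    response = handler({"name": "Ada"}, None)
    assert response["statusCode"] == 200
    assert response["body"] == "hello, Ada"

test_handler_greets_by_name()
```

The same test runs unmodified under pytest or any other runner, locally or in CI.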

Tools and techniques for building serverless applications


There is no single way to build serverless applications, nor a single set of services for the task. AWS leads among powerful serverless solutions today, but also look at Google Cloud, Zeit, and Firebase. If you use AWS, the Serverless Application Model (SAM) is a recommended approach to building applications, especially with C#, since Visual Studio provides great tooling. The SAM CLI can do everything Visual Studio can, so you won't lose anything by switching to another IDE or text editor. Of course, SAM works with other languages too.

If you write in other languages, the Serverless Framework is an excellent open source tool that lets you configure almost anything with very powerful YAML configuration files. The Serverless Framework also supports multiple cloud providers, so we recommend it to anyone looking for a multi-cloud solution. It has a huge community that has built plug-ins for nearly every need.

For local testing, the open source tools Docker-Lambda, Serverless Local, DynamoDB Local, and LocalStack work well. Serverless technologies are still at an early stage of development, as are the tools for them, so setting up complex test scenarios will take real effort. However, simply deploying a stack to a cloud environment and testing there turns out to be incredibly cheap, and you don't need an exact local copy of the cloud environment.

Use AWS Lambda Layers to reduce the size of deployment packages and speed up loading.

Use the right programming language for the task. Different languages have their own advantages and disadvantages. There are many benchmarks, but JavaScript, Python, and C# (.NET Core 2.1+) lead in AWS Lambda performance. AWS Lambda recently introduced the Runtime API, which lets you use the language and runtime of your choice, so experiment.

Keep deployment packages small: the smaller they are, the faster they load. Avoid large libraries, especially if you only use a couple of functions from them. If you program in JavaScript, use build tools like Webpack to optimize the bundle and include only what you really need. .NET Core 3.0 brings QuickJit and tiered compilation, which improve performance and help a lot with cold starts.

The dependence of serverless functions on events can at first complicate the coordination of business logic. Message queues and state machines can be incredibly useful here. Lambda functions can invoke each other, but do this only if you do not expect a response ("fire and forget"): you don't want to be billed for waiting for another function to complete. Message queues are useful for isolating parts of the business logic, managing application bottlenecks, and processing transactions (using FIFO queues). AWS Lambda functions can be assigned SQS dead-letter queues, which capture failed messages for later analysis. AWS Step Functions (state machines) are very useful for managing complex processes that require chaining functions. Instead of one Lambda function calling another directly, Step Functions can coordinate state transitions, pass data between functions, and manage the global state of the workflow. This lets you define retry conditions, or what to do when a specific error occurs: in the right circumstances, a very powerful tool.
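A Step Functions workflow is described in the Amazon States Language, a JSON document; here is a minimal sketch of a two-step chain with a retry policy, built as a Python dict for readability. The state names and function ARNs are placeholders:

```python
import json

# Illustrative Amazon States Language definition coordinating two Lambda
# functions with a retry policy; the ARNs and state names are placeholders.
state_machine = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:validate",
            "Retry": [{
                "ErrorEquals": ["States.TaskFailed"],
                "IntervalSeconds": 2,
                "MaxAttempts": 3,
                "BackoffRate": 2.0,
            }],
            "Next": "ChargeCustomer",
        },
        "ChargeCustomer": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:charge",
            "End": True,
        },
    },
}

# The JSON form is what you would pass when creating the state machine.
definition_json = json.dumps(state_machine)
```

Neither function knows about the other; the state machine owns the control flow, the data handoff, and the retry/error behavior.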

Conclusion


In recent years, serverless technologies have developed at an unprecedented pace, and certain misconceptions have accompanied this paradigm shift. By abstracting away infrastructure and scaling management, serverless solutions offer significant benefits, from simplified development and DevOps processes to large reductions in operating costs.
Although the serverless approach is not without drawbacks, there are reliable design patterns that can be used to build stable serverless applications or to integrate serverless elements into existing architectures.

Source: https://habr.com/ru/post/undefined/

