Serverless computing is a cloud computing execution model in which the cloud provider allocates machine resources on demand and manages the servers on behalf of its customers, allowing developers to focus their time and effort on the business logic of their applications and processes.
Some of the core attributes that distinguish the serverless model are:
Fundamentally, this model is about focusing more on code and less on infrastructure.
The two terms are often used interchangeably, but we should consider FaaS a subset of serverless.
The serverless model applies to many service categories, such as compute, storage, and databases, where the management, configuration, and billing of backend servers are abstracted from the end user. FaaS, by contrast, while perhaps the most common building block of serverless architectures, focuses on an event-driven computing pattern where code executes only in response to events or requests.
The leading cloud providers all offer multiple services that fit the serverless model, e.g. AWS Lambda and Amazon API Gateway, Azure Functions, and Google Cloud Functions and App Engine.
Where does Automic Automation fit in the landscape of a serverless system?
For the remainder of this discussion, I will focus on AWS Lambda as a serverless technology, but you can extract the concepts for any provider.
AWS Lambda can run your code in response to events, such as changes to data in an Amazon S3 bucket, an Amazon DynamoDB table, or a REST call from Amazon API Gateway. A Lambda function can in turn easily invoke any other AWS service.
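To make this concrete, here is a minimal sketch of a Lambda handler (Python is one of the supported Lambda runtimes) that reacts to S3 object-created notifications. The event shape follows AWS's documented S3 event notification structure; the downstream action is left as a placeholder.

```python
def lambda_handler(event, context):
    """Minimal sketch: extract the bucket and key from each S3
    event record delivered to the function."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Downstream processing (e.g. calling another AWS service)
        # would go here.
        processed.append(f"s3://{bucket}/{key}")
    return {"processed": processed}
```

In a real deployment, the S3 bucket's event notification configuration (or an EventBridge rule) wires uploads to this function; no polling is involved.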
What if my business process requires me to integrate AWS cloud services with my on-premises mainframe system? Automic Automation is perfect for automating workloads on mainframes and across distributed systems on-premises and in the cloud. What we need is a reliable integration between those systems.
Imagine the following use case: your mobile trading application generates and stores transaction data on S3 cloud storage. To be verified and processed, these transactions must be handled by your on-premises application running on a mainframe.
How can we trigger a mainframe job every time a file is uploaded by the application to an S3 Bucket?
A polling mechanism would be one option: an Automic job that checks the S3 bucket for new files every minute or so. This semaphore concept is as old as computing, but it is inefficient and always incurs a delay bounded by the polling frequency. Today's modern applications need a proper event-driven approach.
A more robust solution is an AWS Lambda function that listens to the S3 bucket and triggers every time a new file is uploaded. The function then runs an Automic job by calling the Automic REST API. Automic can then download the file from S3 cloud storage, transfer it to the mainframe, and complete the required processing.
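A sketch of that Lambda function, assuming the Automic REST API's executions endpoint (`POST /ae/api/v1/{client}/executions` with an `object_name` in the body), might look like the following. The base URL, client number, job name, and the `inputs` variable names are placeholders for illustration, not real defaults, and authentication is omitted.

```python
import json
import urllib.request

# Placeholder values for this sketch; substitute your own.
AUTOMIC_BASE = "https://automic.example.com/ae/api/v1"
CLIENT = "0100"
JOB_NAME = "JOBS.MAINFRAME.PROCESS_TRANSACTIONS"

def build_execution_request(bucket, key):
    """Build the POST body for the executions endpoint, passing the
    S3 location as input variables (assumed variable names)."""
    return {
        "object_name": JOB_NAME,
        "inputs": {"&S3_BUCKET#": bucket, "&S3_KEY#": key},
    }

def lambda_handler(event, context):
    """For each uploaded file, submit an Automic execution request."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = json.dumps(build_execution_request(bucket, key)).encode()
        req = urllib.request.Request(
            f"{AUTOMIC_BASE}/{CLIENT}/executions",
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        # Authentication headers omitted; the actual call is disabled
        # in this sketch:
        # urllib.request.urlopen(req)
    return {"submitted": len(event.get("Records", []))}
```

The function stays tiny because Automic does the heavy lifting: the Lambda only translates the S3 event into an execution request.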
This gives us real-time responses to events in the serverless environment, lets complex workflows complete processing across our environments, brings visibility to operations, and allows SLAs to be applied to the delivery of the downstream processing.
I have provided detailed implementation steps and all the materials you will need to run this in your environment; you can find the materials here. Start with the README.md.
The milestones involved in the implementation are:
I hope you find the use case interesting and the GitHub materials useful. For a step-by-step implementation guide, read this Skill Builder in the AIOps Skill Builders section of this site.