Shaun Anderson

FaaS, Functions and Serverless... Déjà vu?

Updated: Feb 5, 2021

Similar to microservices, Functions as a Service and Serverless Computing are cool ideas with some obvious utility and some less obvious drawbacks. Suffice it to say that serverless does not solve all problems for all situations any more than microservices, client-server, or monolithic applications do.

Interestingly, for many greybeards and greyheads who have mainframe experience, FaaS and Serverless look and act almost exactly like the mainframe load modules of decades past -- a proven solution.

 

Use cases

If you look at various sites, such as the AWS Lambda site, you will see common uses for functions in the following realms:

  • Data Processing

  • State change engines (like Docket Based Choreography, incidentally)

  • Real-time file processing (file watchers, etc.)

  • Transcoding (videos, thumbnails, indexing, etc.)

  • Filtering (possibly including anti-corruption layers)

  • Data cleansing, stream processing

  • IoT device telemetry

  • Calculation engines

  • Device skills

The common thread here is that there is not much state. The functional domain is fairly self-contained, and these functions are often not parts of a bigger whole. Also, functions are inherently dumb (not stupid; they just don't have much information). This means that APIs, Messaging, Triggers, Gateways, or Data Services need to be configured to make them useful. The more information a function needs to do its work, the more it begins to look like a distributed microservice -- strange concept.
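As a minimal sketch of that "dumb but useful" shape: the (event, context) signature is the standard AWS Lambda Python contract, but the event fields and the default rate here are hypothetical.

```python
# A minimal, stateless "calculation engine" style function.
# The event shape and field names are illustrative assumptions.

def handler(event, context):
    # Everything the function knows arrives in the event -- it holds
    # no state of its own between invocations.
    subtotal = float(event["subtotal"])
    tax_rate = float(event.get("tax_rate", 0.07))  # assumed default rate

    # Do one self-contained piece of work and hand the result back.
    # Where the result *goes* (queue, S3, caller) is configuration,
    # not code.
    return {"subtotal": subtotal, "tax": round(subtotal * tax_rate, 2)}
```

Everything surrounding this function -- the trigger that invokes it and the destination for its result -- lives in wiring, which is exactly why it stays dumb.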


 

Danger Will Robinson

Like any technology, there is a point where the law of diminishing returns kicks in. There is a cost -- a serverless tax -- that needs to be considered when deciding whether to use functions. Often the best solution is a combination of technologies.


To work, a function needs some input and typically produces some output. That input needs to come from somewhere -- S3, a message queue, a data store of some kind, or an API. The output needs to go somewhere too, and the same cast of characters are common choices. As a result, the more things that need to happen in a thin slice of functionality, from the perspective of an application, the more functional daisy chaining you need to do (unless you want to make big, fat, bloated functions -- then you might as well be creating microservices). Once you start stringing a bunch of functions together, it becomes much harder to manage, conceptualize, and test slices of functionality. At some point you reach the intersection where the benefit of using functions is outweighed by the cost of managing the complexity. I see a natural progression (illustrated in the sketch after this list) being:

  1. Functions and serverless are cool! We get cheap scalability, and maintenance and deployment are easy!

  2. Usage of Functions is taking off.

  3. Wait, we have slightly different workstates for each of these functions. Functional groups are created so each flavor of calculation, for example, gets its own function.

  4. Now there needs to be “smarts” around how we differentiate and route between the new flavors of functions.

  5. Queueing, triggering, and routing need to happen.

  6. Workstates become a thing again -- possibly driven by tables.

  7. Functional OLTP and Transaction Management tools begin to appear to aid in orchestration with functions (cue a resurgence of workflow engines, assuming use of serverless takes off).

  8. Finally, you have achieved a very similar solution to CICS and Load Modules on the mainframe. You now have a mainframe in the cloud. Which isn't necessarily a bad thing.
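To make steps 4 through 6 concrete, here is a hedged sketch of the routing "smarts" that tend to emerge: a table-driven dispatcher. The queue URLs, field names, and calculation flavors are all hypothetical; only the boto3 SQS call is real AWS API.

```python
import json

import boto3  # AWS SDK for Python; the queue URLs below are hypothetical

sqs = boto3.client("sqs")

# Step 6: workstates "driven by tables" -- each flavor of work
# gets routed to its own function's queue.
ROUTING_TABLE = {
    "sales_tax": "https://sqs.us-east-1.amazonaws.com/123456789012/sales-tax-queue",
    "vat":       "https://sqs.us-east-1.amazonaws.com/123456789012/vat-queue",
    "exempt":    "https://sqs.us-east-1.amazonaws.com/123456789012/exempt-queue",
}

def route(event, context):
    # The "smarts" from step 4: differentiate and route between
    # the flavors of calculation functions.
    flavor = event.get("calculation_type", "sales_tax")
    sqs.send_message(
        QueueUrl=ROUTING_TABLE[flavor],
        MessageBody=json.dumps(event),
    )
    return {"routed_to": flavor}
```

Notice that this dispatcher is itself another function to deploy, monitor, and test -- the complexity tax the progression describes.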

 

History repeating -- This was invented in the 60s

As a former PL/1 developer on MVS systems, I see a lot of similarities between the way serverless approaches problems and what I remember about the programs I wrote "back in the day". Many of the load modules, whether COBOL or PL/1, were small chunks of code (functions) that provided a specific value and spit the result out somewhere -- typically to very fast data stores like VSAM or DB2. My understanding is that in the late 1960s, as mainframes started to do more than batch processing -- when the big beehive green-screen terminals came into use -- those super fast, ultra reliable batch modules needed real-time integrations. This led to transaction services such as CICS coming into vogue to manage and shepherd requests from module to module. That made it easier to keep the mainframe flavor of tight coupling between the OS and the specialized hardware (CPUs, SAPs, and I/Os) that had been optimized for certain calculations, while improving the "user experience". Ultimately, you have a large set of specialized, optimized functions that become more useful because of the transaction manager.


Sound anything like having GPU-assigned serverless functions? It seems reasonable to think that the "new hotness" could end up looking very similar to the "old hotness". It's interesting to contemplate, at least.


 

Serverless and Microservices

In the short term, I still like the idea of figuring out "how your system wants to behave" and then using that information to feed implementation decisions. Breaking the system down by capability (domains) and sub-capabilities (subdomains) may make it easier to think of your system as services that "own" a particular transaction or thin slice, such as "submit order". To achieve that, the service can throw a message over the wall for a "tax calculation" function to chew on, for example (sketched below). Essentially, the service acts as the OLTP or "capability owner", and functions become simple but efficient engines that are organized by their relationship to the context of the service. See https://www.swiftbird.us/docket-choreography for an example where the "Engines" described can easily be implemented as functions while adding value to the overall capability.
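As a hedged sketch of that split, assuming hypothetical names, an SNS topic, and a flat tax rate: the service owns the "submit order" transaction and publishes a request; the function is a simple engine subscribed to it. The boto3 publish call and the SNS-to-Lambda event shape are standard; everything else is illustrative.

```python
import json

import boto3  # AWS SDK for Python; the topic ARN below is hypothetical

sns = boto3.client("sns")

# Inside the "submit order" service: the service owns the transaction
# and throws a message over the wall for a tax-calculation engine.
def submit_order(order):
    # ... persist the order, own the capability ...
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:tax-calc-requests",
        Message=json.dumps({"order_id": order["id"],
                            "subtotal": order["subtotal"]}),
    )

# The serverless worker: a simple engine organized by its relationship
# to the service's context, subscribed to the topic above.
def tax_engine(event, context):
    for record in event["Records"]:  # standard SNS-to-Lambda event shape
        request = json.loads(record["Sns"]["Message"])
        tax = round(request["subtotal"] * 0.07, 2)  # assumed flat rate
        # The result would be published back or written to a data store
        # the service reads -- wiring again, not code.
```

The design point is the ownership boundary: the service knows about orders; the engine knows only the message it is handed.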


I think it works to treat your microservices as context managers and serverless functions as specialized workers. They can work very well together, even when choreographed rather than orchestrated. Having a bunch of unmanaged workers running willy-nilly throughout your enterprise seems like a mistake to me, though.


I guess time will tell!
