Exploring the Future of Serverless with Sheen Brisals: Why It's Becoming the New Norm
In a special AntStack TV episode, Prashanth HN, CTO of AntStack, sat down with Sheen Brisals, AWS Serverless Hero, to explore the transformation of serverless technology. Reflecting on Sheen’s early experience and “aha moment”, the two serverless heroes discussed how serverless is changing the way teams build and scale systems, and why a shift in mindset and a focus on observability matter when adopting it.
Their discussion offers a unique perspective on the growing adoption of serverless, highlighting its potential, limitations, and future developments.
Whether you’re exploring serverless for the first time or looking to deepen your understanding, these conversations offer a fresh and practical look at the tech’s trajectory. Discover how serverless is becoming an integral part of cloud technology in our blog, or watch the full episode.
Sheen’s Aha Moment with Serverless
Sheen:
When we were looking at technologies for migrating an old platform into something new, we had our requirements, like not wanting to manage servers or self-host containers. So, as part of that exploration, serverless had just started to get noticed.
We began experimenting with simple solutions built on S3, AWS Lambda, and SQS, which was eye-opening for many of us. We decided to put a simple service into production to see how it performed: a single Lambda function behind API Gateway, tested during a Black Friday peak period.
The aha moment came when we observed our dashboard; it was pleasing to see how auto-scaling worked with Lambda. As traffic peaked, instances spun up one after another, illuminating the dashboard. This was a pivotal moment for many engineers and me, leading to numerous subsequent insights.
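To illustrate how small that first production service can be, here is a minimal sketch of a single Python Lambda handler behind an API Gateway proxy integration. The function name, query parameter, and response shape are illustrative, not taken from the episode:

```python
import json


def handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy integration.

    API Gateway passes the HTTP request in `event`; as traffic grows,
    Lambda scales by running more concurrent instances of this function,
    which is the auto-scaling behaviour Sheen describes seeing on the
    dashboard during the Black Friday peak.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

The whole "service" is one function and one route; the operational burden (provisioning, patching, scaling) sits with the cloud provider, which is exactly why it made a good first experiment.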
Managing Resistance to New Technologies
Prashanth:
One of the early challenges for engineers adapting to serverless is relinquishing some control. Many ask how to SSH or manage specific tasks, indicating a need for a shift in mindset regarding serverless architecture compared to traditional methods.
When I speak to various companies, I see a little resistance from the internal teams because they are used to doing certain things in certain ways. To them, serverless sounds like very new and unexplored territory. So, what's your experience with that?
Sheen:
I believe in providing teams the freedom to experiment within the guardrails of principles. If they fail, it's acceptable; they can pivot to something else. This sort of backing is crucial in larger organizations.
So we worked with leaders who were very supportive of the idea of experimenting with serverless. When something goes wrong in production, two things can happen: you roll back and fix it, or you adopt a "fix forward" approach. Leadership backing the latter gives engineers more comfort, because they can assess the impact and decide how to fix forward rather than panic.
So, the backing and understanding from the leaders and the management team are always necessary. And also, to some extent, the stakeholders because these days, they don't sit in the dark; they get involved as they understand technology.
If teams are happy with what they do and there are no issues, then it’s fine, even if they take longer to make the change. But we need to understand the different teams and allow them time, depending on their mindset.
That's one of the reasons you hear so much about the mindset change, or mindset shift, in serverless: this is exactly what engineers need to understand when they work with serverless.
Key takeaways:
Resistance to adopting serverless often stems from established practices within teams, and leadership support is essential for fostering experimentation
A change in mindset and readiness of teams to adopt new technology is crucial for an effortless transition to serverless architecture
Observability in Serverless from the Start
Prashanth:
Many struggle with monitoring and debugging in serverless environments. This requires a change in mindset and also a look into different tools available for serverless architectures.
Sheen:
Yes, the observability you mentioned is core. I see the change now: it has become part of development, and it should start from the beginning.
A while ago, we discussed solution design in Seattle, and you asked how services get designed. One practice I suggest is that engineers create a solution design document addressing architecture, data structures, security, cost awareness, and observability. I ask engineers to record what they think will be necessary for that service to be monitored or observed.
They may not capture everything, but at least when they deploy the service and the data starts coming in, they already have something to keep an eye on.
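One lightweight way to bake observability in from the first deployment, assuming the team is on AWS, is to emit metrics as structured log lines in CloudWatch's Embedded Metric Format (EMF), which CloudWatch extracts into metrics automatically. The sketch below hand-rolls the record; the namespace, service, and metric names are illustrative:

```python
import json
import time


def emf_record(namespace, service, metric_name, value, unit="Count"):
    """Build a CloudWatch Embedded Metric Format (EMF) log record.

    When a Lambda function prints this JSON to stdout, CloudWatch parses
    the `_aws` metadata and publishes the metric, so a team gets a
    monitorable signal without any agent or extra SDK calls.
    """
    return {
        "_aws": {
            "Timestamp": int(time.time() * 1000),
            "CloudWatchMetrics": [{
                "Namespace": namespace,
                "Dimensions": [["Service"]],
                "Metrics": [{"Name": metric_name, "Unit": unit}],
            }],
        },
        "Service": service,
        metric_name: value,
    }


# Inside a handler you would simply print the record as one JSON line.
record = emf_record("Shop/Checkout", "checkout", "OrdersProcessed", 1)
print(json.dumps(record))
```

Listing the metrics a service should emit in the solution design document, then wiring them in like this before the first deploy, is one way to make "observability from the start" concrete rather than aspirational.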
Key takeaways:
Observability must be a priority from the beginning of any serverless project
Recording necessary monitoring elements ensures nothing is overlooked during deployment and allows teams to respond proactively to issues post-deployment
Is Serverless Dead?
Prashanth:
Coming back to mindset, you can offload a lot of things to the cloud provider; your compliance can be easier. But recently, we have been hearing a lot of controversial statements coming out, as some people are saying serverless is dead, and some people are saying go back to on-prem.
I think both of us were early movers into serverless, and by now we would expect serverless to have spread much further, with much higher adoption. But instead, we are hearing these hot takes. So, what do you think about all of this?
Sheen:
It's an interesting topic. Serverless adoption is still spreading and growing, but the fact is that not many teams come forward and share their experiences.
The discourse around serverless often reflects human emotions; some assert that traditional servers are sufficient for their needs and that's fine if that is what they need for that particular business.
But modern enterprises are changing all the time. Even if they're not expanding, they are changing their technology to meet customer demands and deliver new features. This is where new technologies like serverless bring the capability to move ahead.
Of course, mistakes happen, and I know there have been stories in the media saying that it's expensive, complex, etc., so that's where the upfront thinking comes in.
That's why I always preach about serverless adoption: look after the guardrails and principles instead of jumping head-first into serverless. Serverless has to fit the purpose. If your use case cannot be solved by serverless, there is no need to force it.
For example, at my previous company, we needed a serverless solution but faced limitations due to event broker constraints requiring pull connections instead of push notifications from AWS services. In such cases, opting for containers was more appropriate.
Engineers were saying, “Sheen, why don't we just run Lambda for ten minutes, and then it will go off?” It would work, but why complicate things?
These are the situations where we need to be flexible, understand what we need, and bring in the right technology.
Prashanth:
That totally makes sense. Pick the best technology for the use case instead of just slapping one technology on every problem you have.
Key takeaways:
Serverless adoption is growing, though hurdles exist due to concerns around cost and complexity
Serverless isn't a one-size-fits-all solution
Organizations must adapt their tech strategies to meet specific use cases and evolving customer demands
Serverless Is The New Norm
Prashanth:
Containers were a predecessor to Lambda, and then the serverless keyword crept backward to give us serverless containers. Now I see the same thing happening in the Gen AI space.
Interestingly, many Gen AI services don’t label themselves as serverless: AWS Bedrock doesn’t call itself that, while OpenAI operates serverlessly but avoids the term. This raises the question of whether "serverless" is becoming the new norm.
Sheen:
You hit on the exact phrase, the “norm”: it has become the norm because you don't need to state it explicitly.
And coming back to the serverless naming: the way I see it, the services to which AWS forcefully added "serverless" also had another offering, so when they brought in a serverless flavor, they had to differentiate. Whereas if you look at S3, SQS, or Bedrock, they are all managed services; we don't need to label them serverless explicitly. That's how I see it.
Speaking of Gen AI, there are two different ways of looking at it. Bedrock is a completely capable managed service, whereas when you get heavier models that require heavy compute power, obviously, the container option is also there.
AWS is adapting its services accordingly. For example, API Gateway has raised its timeouts to cope with long-running LLM interactions. That sort of change will happen, and the services supporting AI, LLMs, and so on will become clearer over time. There are managed serverless services, and for heavier use cases, of course, there are other options.
Prashanth:
We are also seeing a lot of interest from the AWS side in improving the existing services to cater to Gen AI demand. For example, Lambda response streaming is very useful whenever you are interfacing with an LLM to stream the LLM's response to the client.
So, I think serverless is definitely not dead. In many cases, it will be the norm, so you don't need to talk about it anymore explicitly.
Key takeaways:
As technologies evolve, serverless has started to become a norm rather than an exception
It has become so integrated into cloud offerings that it's not labeled explicitly anymore
The Future of Serverless and Containers
Prashanth:
So, thinking about adoption: I was looking at Datadog’s State of Serverless report, and a lot of the adoption is actually coming from the serverless container space. I started to think that maybe the notion of serverless is so heavily biased towards Lambda that people miss out on a lot of other things happening in that space.
Sheen:
Recently, I've been hearing a lot about serverless containers, so AWS is bringing containers and serverless into the same bowl, which is good in a way. If you think about it, managed services, or what we call serverless services, have existed for years. Take SQS (Simple Queue Service): the beta was released in 2004. So for 20 years we have had serverless; we just did not have the word “serverless”.
If you place all the managed services around and then add Lambda, Lambda gets all the attention because it is the compute. So people push it to the limit, and they hit limits like the 15-minute execution time cap, which prompts them to look for alternatives such as serverless containers.
The other thing is that heavy workloads, say 6-10 GB of memory for 10-15 minutes, can cost a fortune. While occasional runs may be viable under Lambda's pricing model, constant execution can become prohibitively expensive.
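The cost point above can be made concrete with back-of-envelope arithmetic. The sketch below assumes Lambda's published x86 compute price of roughly $0.0000166667 per GB-second; that figure is an assumption to verify against current regional pricing, and the calculation ignores the free tier and per-request charges:

```python
# Approximate x86 Lambda compute price per GB-second; check current pricing.
GB_SECOND_PRICE = 0.0000166667


def lambda_compute_cost(memory_gb, duration_s, invocations):
    """Back-of-envelope Lambda compute cost.

    Lambda bills memory * duration in GB-seconds, so a big-memory,
    long-running function pays for both dimensions at once.
    """
    gb_seconds = memory_gb * duration_s * invocations
    return gb_seconds * GB_SECOND_PRICE


# One heavy run: 10 GB for 15 minutes = 9000 GB-seconds, about $0.15.
per_run = lambda_compute_cost(10, 15 * 60, 1)

# Back-to-back all month (four 15-minute runs per hour, 30 days).
monthly = lambda_compute_cost(10, 15 * 60, 30 * 24 * 4)
```

At around $0.15 per run, occasional heavy jobs are cheap, but constant execution lands in the hundreds of dollars per month, which is the point where an always-on container typically becomes the better fit.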
They want to stay in the serverless space, but they run into the limits of what Lambda can provide, which makes the container option attractive.
That's how the ecosystem grows to bring in more use cases, whereas we already have all the other services that happily support this ecosystem. I think that it's a good thing as long as people identify the boundaries like, “So here is the point I need to look at something else.”
Sheen:
I'm hoping that the Lambda ecosystem within serverless could transform into something bigger.
We hear about serverless containers that can shut down and start back up on demand. But it depends on the type of job you're running. If you're running a persistently connected function or program, you don't have the liberty to shut down and start up like that.
I'm hoping to see more changes slowly coming in to take the whole Lambda ecosystem to a different level, which will support all other services and strengthen the ecosystem.
I keenly watch the Gen AI space as there's so much noise.
Another area is event-driven architecture. It's a good way of doing things, but it also generates quite a lot of noise in terms of complexity. It needs thoughtful implementation practices to build confidence among teams navigating those complexities, especially when you have distributed systems all around. That's how the technology is evolving to cope with all the demands.
Prashanth:
On one side lies EDA as an architectural pattern, while on the other is LLM development focused on agents performing actions. I foresee these two domains converging effectively over time as they evolve together.
Sheen:
For newer generations of engineers, these concepts may seem novel; however, event-driven architecture has long been foundational in operating systems like Windows. It has only recently gained mainstream commercial attention.
Key takeaways:
The rise of serverless containers highlights a trend in cloud computing
Understanding the use of Lambda vs. container solutions is important for optimizing costs and performance based on workload requirements
The key is choosing the right technology for specific use cases, with event-driven architecture playing a central role in future developments
Prashanth and Sheen’s real-world experience offers valuable lessons for tech leaders navigating the complexities of the serverless landscape.
Catch up on the episode on AntStack TV and subscribe to hear from serverless experts as they share practical insights and strategies for making technology work for you.