Serverless Success Stories with Sharath from Azuga, a Bridgestone Company!
Join Prashanth HN, CTO of AntStack, and Sharath, the Vice President of Engineering at Azuga, as they discuss Azuga’s remarkable journey of adopting serverless technology. From their initial hesitation to becoming strong advocates of serverless, Sharath shares the challenges and triumphs of transitioning to cloud-native architectures.
The conversation highlights the mindset shift required for successful serverless adoption and shares real-world implementation insights. Read the blog or watch the full episode for a practical and forward-looking perspective on the evolving role of serverless in the modern tech world.
Azuga’s Early Days and Growth
Prashanth:
You have been with Azuga since the first day. How has it evolved over the last 12 years from a small startup to the large company it is today? How has it been for you?
Sharath:
I am currently serving as Vice President of Engineering at Azuga. It's been a little over 12 years now.
Azuga provides end-to-end solutions for fleet management companies across all verticals. We focus on vehicle health, road safety, driver behavior monitoring, and more. We help our customers transform driver behavior positively. We also work with insurance companies and government agencies in the insurtech space.
It’s been a good ride. Like any startup, we wore multiple hats. There were very few of us at the start, and we did everything from sales to customer support.
As a startup, we had to prioritize shipping faster, getting our first customers, and gathering feedback to improve the product! Doing things the right way wasn’t our priority at that time. We incurred technical debt in the early years, but over time, we became more mindful of doing things the right way. This shift in mindset happened after a few years; until then, the focus was 100% on the business.
The Decision to Transition to Serverless
Prashanth:
I remember when I first proposed serverless as a solution to you, you seemed hesitant. What was going through your mind at the time?
Sharath:
I recall the first email about serverless back in 2016. We were growing rapidly company-wise, volume-wise, and customer-wise, and it didn’t seem like the right time to distract ourselves with something new. But serverless was intriguing.
We attended an AWS Dev Day conference in 2016, where we learned more about DynamoDB. We talked about doing a PoC to see what it was all about. But there was still hesitation—we hadn’t seen anyone build an enterprise B2B application using serverless. However, when we met in 2019, things changed. What more do you need when you have the very first AWS Serverless Hero from India helping you build the system? We saw it as an opportunity to start with a clean slate on a greenfield project and implement serverless from the start. AntStack played a significant role in helping us overcome our hesitation and transform our mindset.
Expanding Serverless Beyond Initial Phase
Prashanth:
How did you transition? Did serverless adoption remain confined to just the greenfield project, or did it spread across the organization?
Sharath:
It definitely spread beyond the initial project. The product helped us achieve the mindset transformation we needed. We faced many challenges, whether with people, complexity, or resistance to change. Embracing serverless for the first time can feel intimidating. I would get questions like, “How do I log in?”, “How do I access my instance?”, “How do I restart it?”, “Can I restart it?”
But once we moved past that initial uncertainty, it was a different world. The impact wasn’t limited to just that product—it helped us make other products 100% serverless. Our core product, too, has seen a significant shift, with its workload running on serverless architecture. We now refer to it as a “cloud-native architecture,” as people often equate serverless solely with Lambda functions.
We’re becoming increasingly cloud-native—not just with what we’re building new but also by actively migrating some of our existing workloads.
Key Takeaways:
The early stages of serverless adoption can be challenging, but you move past the uncertainty with experience.
Transitioning to serverless requires a mindset shift, embracing new ways of building and managing applications.
Greenfield projects are the easiest way to experiment with serverless and test the waters.
Migrating existing workloads to serverless is an important step in becoming truly cloud-native; it helps organizations realize the full benefits of the cloud.
Overcoming Technical Challenges & Scalability
Prashanth:
I never imagined that serverless could expand beyond the project we were focused on and be adopted across the fleet. My perception was that transforming Azuga's systems would be a massive undertaking. I remember when we connected to explore migrating the database to a different type to improve scalability. At the time, the scale of data we were discussing was mind-blowing.
Could you share some details about the scale? Are all the events coming from the fleets? Something slightly technical but still high-level?
Sharath:
One reason serverless is such a good fit for us is that everything in our products is event-driven. The triggers—the sources—come directly from what's happening in the field. For instance, if a driver brakes, five things might happen in the system. If there’s an accident, ten other things could be triggered. The entire system revolves around events from the field, so serverless and event-driven architecture align very well.
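The fan-out Sharath describes, where one field event triggers several downstream actions, can be sketched in a few lines of Node.js. The event types and handler actions below are illustrative assumptions, not Azuga's actual system; in production the dispatch would typically be done by an event bus such as Amazon EventBridge invoking one Lambda function per handler.

```javascript
// Map each field event type to the independent handlers it should trigger.
// Event names and actions are hypothetical examples.
const handlers = {
  HARD_BRAKE: [
    (e) => ({ action: "scoreDriver", vehicle: e.vehicleId }),
    (e) => ({ action: "notifyFleetManager", vehicle: e.vehicleId }),
  ],
  ACCIDENT: [
    (e) => ({ action: "dispatchEmergency", vehicle: e.vehicleId }),
    (e) => ({ action: "openInsuranceClaim", vehicle: e.vehicleId }),
    (e) => ({ action: "captureVideoClip", vehicle: e.vehicleId }),
  ],
};

// Fan one incoming event out to every registered handler.
function dispatch(event) {
  return (handlers[event.type] || []).map((handle) => handle(event));
}
```

For example, `dispatch({ type: "HARD_BRAKE", vehicleId: "V-42" })` produces two independent actions, while an unrecognized event type produces none.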
On the technical side, though, the challenges are not as simple as picking and replacing one service. The existing system has many dependencies. We had to map out where we are today and where we wanted to be over the next few quarters. We also had to design a transitional architecture to bridge the gap from point A to point B.
It’s not all black and white. You can’t just move a service to serverless, especially from an existing architecture. It requires workarounds. In some cases, yes, we could move one service at a time, but in others, we needed transitional architectures. I like to call it “open-heart surgery on production.” That was the biggest worry initially—you’re not rewriting everything for 12 months and then flipping a switch to go live while shutting the old system down. That kind of “Big Bang” change is always expensive.
Instead, we approached it iteratively. Where possible, we moved one service at a time, and in other cases, we used transitional architectures. It’s been challenging, but we’ve done a good job so far.
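The iterative, one-service-at-a-time approach is often called the strangler fig pattern: a thin routing layer sends traffic for migrated services to the new serverless endpoints while everything else stays on the legacy system. The sketch below is a minimal illustration of that idea; the service names and URLs are hypothetical, not Azuga's.

```javascript
// Services that have already been moved to serverless, one at a time.
// As migration progresses, names are added to this set.
const migrated = new Set(["trips", "alerts"]);

// Route a request to the new serverless stack or the legacy system,
// depending on whether the service has been migrated yet.
function routeRequest(service) {
  return migrated.has(service)
    ? { target: "serverless", url: `https://api.example.com/v2/${service}` }
    : { target: "legacy", url: `https://legacy.example.com/${service}` };
}
```

The routing layer is itself the "transitional architecture": once every service is migrated, the legacy branch and the router can be retired together, avoiding the expensive Big Bang cutover.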
The numbers are immense on the volume side. The events coming from the field are just one aspect. We also have a lot of API traffic, as many partners actively use our webhooks integrated with our APIs. The volumes on those are very high as well.
One good thing is that despite this high volume, we’re saving costs with serverless workloads—contrary to what some articles might say.
Key Takeaways:
Moving to serverless isn't as simple as swapping one service for another. Existing systems often have complex dependencies, requiring careful planning and transitional architecture to bridge the gap.
Impact on Operational Efficiency
Prashanth:
How has it helped with management or operations? Previously, you were responsible for instances, containers, and a lot more. Now, with AWS’s shared responsibility model, you’re theoretically pushing more toward the cloud and less onto yourselves. Has that worked out as expected in practice?
Sharath:
Initially, when you first embrace serverless, there’s this perception that you’re losing control over your systems. But once you move past that phase, the benefits become clear: you no longer have to handle the ten different tasks you’d otherwise be responsible for when managing your own infrastructure. This shift allows us to focus 100% on adding business value.
Our teams are now structured to give each one full visibility into the specific business area they support. This has been a game-changer for us, allowing them to concentrate entirely on delivering value instead of worrying about infrastructure, scaling, and related operational concerns.
Another benefit is how it empowers developers. Engineers now have full control over their own deployment cycles within their workstreams. They’ve become more self-sufficient, managing their workloads and products independently.
Operationally, it has significantly reduced overhead.
Key Takeaways:
Initially, serverless feels like losing control, but over time, the shift to the cloud reduces the burden of managing infrastructure, allowing teams to focus entirely on business value.
Serverless gives developers autonomy, allowing them to control their own deployment cycles and manage their workloads independently, leading to increased self-sufficiency and efficiency.
Offloading infrastructure management reduces operational overhead and frees up resources to concentrate on more strategic tasks.
Cost Savings Beyond Infrastructure
Prashanth:
So, when you mentioned cost reduction, I assume you were only referring to the plain infrastructure costs, right?
Sharath: Actually, it’s both. Depending on the volume, infrastructure costs have been significantly lower for some workloads. So far, we haven’t reached the threshold where we’d pay more than in the traditional model.
But when I talk about cost, I’m also considering other benefits. For example, because we use Node.js, we naturally see more developers transition into full-stack roles. Most of our products are built on Angular and React, and transitioning from Node.js as the backend to full-stack development has been very seamless for our teams.
Additionally, with function-as-a-service (FaaS), we’re compelled to think and build modularly, which reduces the blast radius of any failure. Over time, this mindset has become second nature for us.
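The reduced blast radius Sharath mentions comes from each function being a small, isolated unit: one module failing does not take its neighbors down with it. A minimal sketch, with invented module names, of invoking single-purpose functions independently:

```javascript
// Each entry stands in for a small, single-purpose serverless function.
// Names and behaviors are illustrative only.
const modules = {
  ingest: (e) => ({ stored: e.vehicleId }),
  report: () => { throw new Error("report service down"); },
  notify: (e) => ({ notified: e.vehicleId }),
};

// Invoke every module independently; a failure in one is recorded
// locally and does not prevent the others from running.
function runAll(event) {
  const results = {};
  for (const [name, fn] of Object.entries(modules)) {
    try {
      results[name] = { ok: true, value: fn(event) };
    } catch (err) {
      results[name] = { ok: false, error: err.message };
    }
  }
  return results;
}
```

In a monolith, the throwing `report` module could crash the whole process; here its failure stays contained while `ingest` and `notify` still succeed.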
So, when you look at the net benefits, it’s not just about the dollar amount on infrastructure. While you could assign some monetary value to these gains, the overall positives—developer agility, modularity, and reduced risk—are significant.
Key Takeaways:
The true cost savings come from other factors, such as increased developer efficiency and agility.
Serverless encourages a modular approach to development, which reduces the risk of large-scale failures and makes it easier to manage and scale applications.
Transforming Team & Workflows
Prashanth:
So, as an organization, you had separate DevOps and development teams. Now that you’ve transitioned to serverless, and developers own the deployment process, does that mean you’ve moved to a single-team model? What’s been the impact of serverless on the way your organization operates?
Sharath:
We still have a good number of container-based workloads, so I want to go back to the earlier point. A purist approach to anything isn’t practical. If we try to prove that serverless can solve every problem, we might work around issues, but that’s not the right way to operate. We have many highly scalable and performing services, so there’s no need to migrate those to serverless unnecessarily.
The goal isn’t to move 100% of our workloads to serverless; it’s dependent on the use case. That said, most of our use cases are a good fit for serverless. To answer your question, we still have container workloads and other products, but the transition is ongoing.
Having a team that’s very receptive to the change has worked well for us. They’re actively helping one another navigate this transition and learn the necessary skills.
Prashanth:
So, what has changed? What has made the team more receptive to serverless and willing to embrace this new approach?
Sharath:
It’s about having the data to prove the value. When we started, serverless was more of a theory—a concept. We built one product that became a big success story within Azuga. Then, we brought serverless to some of our other mainstream products, which operate at a much higher scale.
For example, we recently migrated our video telematics platform—a core product offering with a camera-based solution—from a Java-based system to serverless. It’s a high-scale product, and the migration has been a huge success. There have been zero scaling issues and zero bugs, and it’s been a few quarters since we moved it.
As we’ve completed more workload migrations, more team members have been willing to “dirty their hands.” Initially, there’s hesitation—questions like, “Why should I move?” But they see the benefits once they jump in and try it for themselves. People love it because they can go live faster, with rapid development and agility that are immediately visible.
It’s been a transformational journey that took time to get everyone on board. We started small, with one team focusing on one module at a time within the framework of our transitional architecture. As we progressed, mindsets began to shift.
Now, even some of our traditional Java developers have upskilled and become full-stack developers. The difference compared to two years ago is remarkable.
Key Takeaways:
Moving to serverless isn't about replacing everything. Container-based workloads and other highly scalable services remain in use, with serverless adopted based on specific use cases.
The focus should be on adopting serverless where it makes the most sense for scalability and performance.
Shifting Mindsets
Prashanth:
Now that you’ve become a strong advocate for serverless and are adopting it across your organization wherever possible—migrating from containers or EC2 workflows to serverless architecture—how has this changed your mindset toward technology? Has it influenced how you approach decisions or evaluate solutions?
Sharath:
Absolutely. Technology-wise, it has conditioned us to think in this direction. Whenever we make a choice, we now naturally ask: Can this be achieved in a cloud-native environment? Do I need to manage it? What can I offload so I can focus on business needs instead?
This thought process has become second nature, shaping how we evaluate solutions. The first consideration is always whether it fits within a cloud-native, serverless model, which has been a significant influence. However, no one size fits all—our decisions remain use-case-driven. Still, the preference is to explore whether it can operate in a serverless environment and minimize the management burden before considering other options.
Prashanth:
Basically, you may not throw serverless at everything, but you look at every problem through a serverless-first lens. If it doesn’t seem like the right fit, then you move on to explore other technologies.
Sharath:
Exactly. Plus, many services we use today are serverless behind the scenes—even if they’re not labeled as such. It’s becoming the norm in the industry, which is great to see.
Key Takeaways:
Serverless is not a one-size-fits-all solution. The decision should depend on the specific problem to ensure you choose the best solution.
Instead of throwing serverless at everything, evaluate every problem through a serverless-first lens. If it doesn't fit, explore other technologies.
Don’t Resist Serverless Without Trying
Prashanth:
This will be my final question. What would you like to tell anyone who is still deciding whether to go serverless?
Sharath:
Don’t be a purist. Don’t resist serverless based on what you read or hear. I’ve seen many people, even senior engineers at Azuga, sharing blogs and articles about why serverless isn’t a good idea. My advice is to get your hands dirty and try it out yourself. Don’t rely on articles.
At first, there will be resistance—embracing serverless isn’t easy at the beginning. Be patient and push through that phase. If, after trying, it’s still not the right fit for a use case, then you’ll have a solid reason not to embrace it. But don’t reject it just because some article says it’s not the right choice.
The real benefit comes once you accept that these are managed services. You can build production applications and workloads without needing to log into a server and change the parameters yourself. That mindset shift is key. So, give it a wholehearted try. Move past the initial hurdles and embrace it. Just don’t resist without truly experiencing it yourself.
Key Takeaways:
Don't dismiss serverless based on hearsay or articles. The best way to understand its potential is by trying it firsthand.
The initial learning curve may be challenging, but with patience and persistence, the benefits become evident.