GOTO EDA Day with AWS

Daniel Vaughan
Sep 7, 2022 · 5 min read
[Photo: Gregor Hohpe in full swing]

Last Thursday, I attended the GOTO EDA (Event-Driven Architecture) conference at CodeNode in London. As someone who used to go to events at CodeNode before the pandemic, it felt great to be back in the building for the first time.

Rather fittingly, the keynote was from Gregor Hohpe. At the beginning of the pandemic, I sat outside on a tiny patch of grass that cannot be called a garden, reading his books on the role of the modern architect and cloud strategy.

Since joining AWS in 2020, Gregor has been an advocate for using AWS-native services. His keynote reinforced many of the messages I had read about but added new advice on the event-driven approach to software.

Some of the gems I paraphrased:

  • “Architect” is a way of thinking rather than a title. It differs from a pure developer role as it requires seeing in more dimensions, being able to zoom in and out, and seeing in shades of grey.
  • Separate your architecture (design principles) from your product choices.
  • Architects are like great chefs: it’s not just about selecting ingredients, it’s about how to put them together.

Gregor talked about the loose coupling of EDA and its trade-offs, with the benefits summarised as:

  • Limiting the change radius so development teams can move faster.
  • Limiting the error radius to increase reliability and fault tolerance.

Both of these ultimately reduce costs, as developers are better utilised and fewer operations people are needed. I often see that organisations need to address these two before any other cloud benefits become significant.

He also emphasised that the subject of the conference was Event-Driven architecture (responding to events) and Event Orchestration. It was not about Event Sourcing, which, according to Gregor, is challenging to get right. This set the scene for the rest of the day, which explained how the AWS platform and patterns support these principles.

In a hallway conversation, I heard that a challenge AWS faces is that each AWS service is owned by a “two pizza team”, which enables them to get services to market fast. However, as the teams are independent, integration between services doesn’t tend to be as good as in other public clouds. AWS is now attempting to address this by assembling toolkits of services around the “personas” of users, and EDA is one of these.

This fits with my experience of working in cloud-native development. To be successful, teams need the platform, the patterns and the programming model, and it is encouraging to see AWS present all three at this event. While to me EDA appears to be a rebranding or evolution of “serverless”, that is no bad thing.

During the day it was noticeable how few times containers or Kubernetes were mentioned. The one point I did hear about Kubernetes was in a case study where a customer quoted a 30% cost reduction from moving from Kubernetes to Lambda, but it was not clear whether this was simply a saving on idle Kubernetes cluster capacity.

EDA is pitched as a “next-generation” model allowing developers to step over containerisation and onto AWS’s fully managed services, especially Lambda.

Most importantly, these are all services that are proprietary to AWS.

I first built a system with AWS Lambda in 2016, shortly after it was released. With Lambda, code is executed in response to events and billed only for the compute time and memory used while executing. This makes it potentially highly cost-effective for infrequently used systems or systems that need to scale up and down rapidly.
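
The event-in, response-out shape of that model can be sketched as a minimal Lambda-style handler in Python. This is purely illustrative: the function name and event fields are my own assumptions, but the `handler(event, context)` calling convention is what Lambda uses to invoke your code for each event.

```python
# Minimal sketch of a Lambda-style handler (illustrative; event fields
# are assumptions for the example). Lambda calls handler(event, context)
# once per incoming event, and you pay only while this code runs.
import json


def handler(event, context):
    # 'event' is the JSON payload from whatever triggered the function,
    # e.g. an API Gateway request or a queue message.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

One thing that made experimentation so frictionless is that a handler like this can be invoked locally with a hand-written event dict before it is ever deployed.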

At the time I loved how easy it was to build and felt like I had a big box of Lego with which I could get from idea to realisation with little friction. I enjoyed being able to come up with a feature for my pet project in Starbucks and within an hour having it running in production. This was the experience that sold me on Cloud Native Development and led me to want to share my experience with others.

Coming back to AWS after a break of a few years was useful for me. The main limitation I remember was how easy it was to create a spaghetti of interconnected services. Weak support for testing, local development and deployment management made it difficult to use at scale. I was happy to see that several new services have been introduced to address these limitations.

  • EventBridge — a centralised event bus service that provides an event schema registry, rather than the ad hoc SNS and SQS approach I used.
  • AWS SAM — a framework for building, deploying and setting up CI/CD pipelines for serverless applications.
  • AWS CDK — a development kit for AWS that, like AWS SAM, helps a lot with automation. I am guessing the overlapping products are the result of the “two pizza” teams mentioned above.
  • AWS Step Functions — for orchestrating functions into workflows. This was something that was just coming out when I used AWS but has now matured.
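
To show what the EventBridge-style schema buys you over ad hoc queue wiring, here is a hedged sketch of a consumer handler. The envelope keys (`source`, `detail-type`, `detail`) follow EventBridge's standard event shape; the order-event names are my own invented example, not anything from the talks.

```python
# Sketch of a Lambda consumer for an EventBridge-shaped event
# (illustrative; the "OrderPlaced" event and its fields are assumptions).
# The envelope keys "source", "detail-type" and "detail" follow the
# standard EventBridge event structure.

def handle_order_event(event, context):
    # In practice an EventBridge rule filters on source/detail-type
    # before invoking the function; a defensive check keeps intent clear.
    if event.get("detail-type") != "OrderPlaced":
        return {"status": "ignored"}
    order = event.get("detail", {})
    return {"status": "processed", "order_id": order.get("orderId")}
```

Because routing decisions live in bus rules keyed on a shared schema, rather than in each producer's choice of SNS topic or SQS queue, the wiring between services stays visible in one place.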

Observability still appears to be an issue: while there are native services like AWS X-Ray, it sounds like there are significant limitations, which is why third-party tools like Lumigo are recommended.

It is good to see improvement in this space and the “serverless” development model I enjoyed maturing. However, I can see the obvious AWS-centric limitation in this approach. It looks fantastic for rapid iteration, but I can see potential challenges in teams and in production, as I am not convinced testing or observability have been solved. I also have concerns about scaling, as service limits and Lambda concurrency limits appeared to be a significant consideration.

There is also the dread of making a mistake and getting a huge bill. I would have expected this to be solved by now, but there is still the risk of accidentally creating a loop of requests, for example. While AWS did mention some “loop detection” features, the risk remains. I would want the option to say “if my budget is exceeded, stop”, as I have set up on Google Cloud.
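
One application-level mitigation for the runaway-loop risk, independent of whatever platform detection exists, is to carry a hop counter in each event and refuse to re-publish past a budget. This is a sketch of that pattern; the `hopCount` field, `MAX_HOPS` value and `publish` callback are all my own assumptions, not an AWS feature.

```python
# Illustrative guard against runaway event loops (field names and the
# hop budget are assumptions, not an AWS mechanism). Each re-published
# event carries a "hopCount"; once the budget is exhausted, the event
# is dropped instead of being forwarded, capping accidental loops.
MAX_HOPS = 5


def forward_event(event, publish):
    hops = event.get("hopCount", 0)
    if hops >= MAX_HOPS:
        return False  # drop rather than re-publish; alert in real life
    publish({**event, "hopCount": hops + 1})
    return True
```

It is a blunt instrument, but a dropped event is far cheaper than an unbounded chain of invocations, which is exactly the bill-shock scenario I worry about.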

In the concluding panel discussion, the point was made that the EDA model described at the conference is cutting edge and only 0.1% of developers are developing this way, so uptake is not huge yet. This fits with what AWS architects I have spoken to before have said: building in an AWS-native way is rare. Although I have enjoyed this style of development, I believe there is a place for containers, especially when it comes to portability, not just between clouds but between developers’ laptops and the cloud.

In an enterprise environment, I would still favour containers. For example, I like Knative, which is effectively containerised serverless and underpins services such as Google Cloud Run and IBM Code Engine. I like these types of cloud services where there is an open-source equivalent; for enterprise use cases that also makes people a lot more comfortable.

I also like that containers can fall back (or progress, depending on how you see it) to Kubernetes, either in the cloud or on-prem if needed. Services like Google GKE Autopilot, which use Kubernetes but bill in a serverless style, seem to me to be the best of both worlds.

The goal, in my opinion, is one toolkit that provides the rapid innovation of something like Lambda for PoCs but is also scalable, secure, manageable and, to some degree, portable in production. While the AWS EDA toolkit presented at the conference is likely to meet the first requirement, I still have my doubts about the second.
