SESSION + Live Q&A

After Acceptance: Reasoning About System Outputs

Modern software development allows us to prove that new work is functionally complete. We write a set of executable specifications. We automatically execute them in the form of acceptance tests as part of our continuous delivery pipeline. When all the tests pass, we are done!

This approach is superior to what came before it, but it is by no means perfect. Testing frequently ends at the point of release, meaning that bugs in production are caught late, often by end users. Pragmatism dictates that exhaustive acceptance testing is infeasible; tests are likely to represent only a simplified version of user interactions. In production, data will almost certainly be generated by code paths that have not been fully exercised in acceptance tests, and that data is usually decoupled from our acceptance testing environment. If the current version of the system generates durable data, how do we know that future versions will consider it valid and act upon it appropriately? How can we find out about bugs after acceptance, but before our customers do?
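
To make the durable-data question concrete, one shape such a check can take is a pipeline stage that replays a sample of records written by the current version through the candidate version's validation logic. This is a minimal sketch, not the speaker's actual method; the record schema, the validator, and the sample file name are all illustrative assumptions:

```python
import json

# Hypothetical validator taken from the *candidate* release: raises
# ValueError if a record produced by an earlier version would now be
# rejected. The "order" schema here is invented for illustration.
def validate_order(record: dict) -> None:
    required = {"order_id", "amount_pence", "currency"}
    missing = required - record.keys()
    if missing:
        raise ValueError(f"missing fields: {missing}")
    if record["amount_pence"] < 0:
        raise ValueError("negative amount")

def check_durable_data(sample_path: str) -> list[str]:
    """Replay a sampled export of production records (one JSON object
    per line) through the new version's validation and report anything
    it would now reject."""
    failures = []
    with open(sample_path) as f:
        for line_no, line in enumerate(f, start=1):
            try:
                # json.JSONDecodeError is a subclass of ValueError,
                # so malformed lines are reported too.
                validate_order(json.loads(line))
            except ValueError as err:
                failures.append(f"line {line_no}: {err}")
    return failures

if __name__ == "__main__":
    problems = check_durable_data("production_sample.jsonl")
    # A non-empty list fails this pipeline stage before release.
    assert not problems, "\n".join(problems)
```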

This session will walk through techniques for bringing your testing into production. It will show you how to sanity-check a live system using end-to-end tests while limiting interference with real user interactions and outputs. Finally, it will suggest ways to observe real production data, integrate it into a continuous delivery pipeline, and assert on the validity of the production system's outputs.
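
As a flavour of the kind of check the session describes, here is a sketch of a synthetic end-to-end transaction run against a live system, tagged so downstream systems can filter it out of real user data. The endpoint, header name, payload, and cleanup call are assumptions made for illustration, not a description of the speaker's setup:

```python
import uuid
import requests

BASE_URL = "https://api.example.com"          # assumed live endpoint
SYNTHETIC_HEADER = {"X-Synthetic-Test": "1"}  # assumed tag letting downstream
                                              # consumers ignore this traffic

def smoke_test_order_flow() -> None:
    """Push one synthetic transaction through the live system, assert on
    the validity of its output, then clean up after ourselves."""
    order = {"sku": "TEST-SKU", "quantity": 1, "test_ref": str(uuid.uuid4())}
    resp = requests.post(f"{BASE_URL}/orders", json=order,
                         headers=SYNTHETIC_HEADER, timeout=5)
    resp.raise_for_status()
    body = resp.json()
    # Assert on the output of the real production system.
    assert body["status"] == "accepted", body
    # Remove the synthetic order so it never reaches real fulfilment.
    requests.delete(f"{BASE_URL}/orders/{body['order_id']}",
                    headers=SYNTHETIC_HEADER, timeout=5).raise_for_status()

if __name__ == "__main__":
    smoke_test_order_flow()
    print("production smoke test passed")
```

Run periodically, a check like this catches post-acceptance breakage before customers do, while the tag keeps the synthetic traffic out of real metrics and reports.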



Speaker

Dr. Stefanos Zachariadis

Senior Software Engineer

Stefanos loves to code and has done so professionally for over 12 years. His career has taken various twists and turns: from academia, to writing satellite software for the European Space Agency, flight search software for a major airline, test automation for various banks, and steam turbine design...


Location

Whittle, 3rd floor

Track

Observability Done Right: Automating Insight & Software Telemetry

Topics

Observability, Interview Available, Acceptance Testing


From the same track

SESSION + Live Q&A Observability

Avoiding Alerts Overload From Microservices

Microservices can be a great way to work: the services are simple, you can use the right technology for the job, and deployments become smaller and less risky. Unfortunately, other things become more complex. You probably took some time to design a deployment pipeline and set up self-service...

Sarah Wells

Former Tech Director for Engineering Enablement @FT (Financial Times)

SESSION + Live Q&A Microservices

Do You Really Know Your Response Times?

With the recent surge in highly available microservices handling high incoming traffic, it is becoming more and more important to know how your service is performing right now and to be able to diagnose issues in production quickly. It took a while for us to understand how to produce meaningful...

Daniel Rolls

Collecting and Interpreting Large-Scale Data @SkyUK

SESSION + Live Q&A Serverless

Monitoring Serverless Architectures

Serverless architectures are attracting more and more interest from IT professionals and companies hoping to lower the costs of creating and operating distributed systems without constantly worrying about availability, scalability and capacity management. Despite all the attractive properties...

Rafal Gancarz

Lead Consultant @OpenCredo

SESSION + Live Q&A Observability

Observability, Event Sourcing and State Machines

How can we achieve complete transparency into the state of a service? Ideally, we would record everything - the inputs, outputs and timings - in order to capture highly reproducible and transparent state changes. However, is it possible to record every event or message in and out of a service...

Peter Lawrey

CEO @Chronicle_SW

SESSION + Live Q&A Open Space

Observability Open Space
