DISQO software engineers were not always responsible for testing their code. Up until two years ago, we had a separate QA team. Engineers would submit their merge requests to the QA engineers, who would run a series of tests to ensure the feature was working. Because of the complexities of our system, communication between the two teams was sometimes difficult. My team works on a monolith, and the separation between engineering and QA exposed our many interdependencies and incomplete documentation.

Once given the thumbs up from QA, the engineers would put the code on our staging environment and schedule deployment for Tuesdays and Thursdays. Back then, the deployments were large, and we only had six software engineers and three QA engineers. Our QA team needed to test everything prior to deployment. Once we deployed, the QA team also ran the same tests in production.

The change in testing responsibilities happened when our current Director of Engineering, Marco Huerta, came on board. He was initially hired as the Director of QA and drove efforts for QA automation. After we implemented automated testing, Marco and our CTO, Drew Kutcharian, started discussing combining the engineering and QA processes. The actions that came out of those meetings drove a cultural shift toward an engineering organization where everyone is responsible for quality code.

Testing early and often during the build, known as Shift Left Testing, is not a new concept in software engineering. It was formally introduced in 2001 by Larry Smith in Dr. Dobb’s, a publication dedicated to software development. The methodology requires more initial effort, but it pays off with high-quality results. When we test too late, we run the risk of misinterpreting requirements, making debugging more difficult and time-consuming, accumulating more technical debt, and more.

Now at DISQO, software engineering is not just about coding and meeting business requirements but also about extensive testing.

In this article, we will talk about acceptance testing specifically within my team, the Purple Electric Penguins (PEP). I’m the Product Manager, so I work very closely with the team daily in all phases of development. Ours is the largest team and works on a monolith application, which requires constant maintenance and new features. We work on the primary client-facing DISQO product, surveyjunkie.com. So if we deliver features that don’t work, we not only cause issues for clients and damage our reputation, but we also create greater demand on our Customer Success team.

The Goals of Testing

We want to reach two main goals when acceptance testing. First, we want to ensure that what we build aligns with what the business needs. Second, we want to ensure that any new code is consistent with our designs and complies with privacy laws.

The first goal of testing is that the feature must work as expected according to business requirements. We discuss how a feature should behave during refinement and subsequent meetings. Product Managers like me provide the business requirements and background, or what we hope to achieve and why.

The second goal of testing is ensuring the feature is built according to design requirements and privacy concerns. The Product Manager works with our Designer to ensure that all design specifics are explicit, including fonts, colors, and layout. With precise design direction, we equip engineers to build in alignment with the design. The end goal is to make the client experience feel consistent throughout.

Technical Components of Acceptance Testing

The engineers test each other’s code and have true ownership of the product. In our first round of testing, engineers test the feature in different environments that replicate production. After this, we move to User Acceptance Testing (UAT). UAT starts with meetings where I come in and test the feature with the engineers. They present what they’ve been working on, and I try out different user flows to reach the same scenarios and see if the feature works as expected. UAT is the final step before the feature is signed off for production.
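The article doesn’t show DISQO’s actual test code, but as a rough illustration, an automated check of a single user flow in a production-like environment might look something like this Playwright sketch. The URL, selector, and flow are hypothetical.

```typescript
// A minimal sketch, not DISQO's test suite: verify one user flow end to end.
import { test, expect } from '@playwright/test';

test('member can start a survey from the dashboard', async ({ page }) => {
  // Hypothetical staging URL and button label; the real environment differs.
  await page.goto('https://staging.example.com/dashboard');
  await page.getByRole('button', { name: 'Start Survey' }).first().click();

  // Acceptance criterion for this flow: the member lands on a survey page.
  await expect(page).toHaveURL(/\/surveys\//);
});
```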

Site Performance

The feature has to maintain or improve the current site performance. Performance can be broken down into two separate categories: technical and business. On the technical side, we monitor for errors, response time, and web server health. On the business side, we measure Survey Junkie site performance using various metrics, including conversion rate, number of survey starts, number of survey completes, number of redemptions, and revenue per survey complete.
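As an illustration only (the article doesn’t specify how these metrics are calculated, so the field names below are made up), the business-side measurements could be derived from raw daily counts like this:

```typescript
// Hypothetical daily counts; the real event names and sources are not shown in the article.
interface DailyCounts {
  surveyStarts: number;
  surveyCompletes: number;
  redemptions: number;
  visitors: number;
  signups: number;
  revenue: number; // revenue attributed to completed surveys
}

// Derive the business metrics mentioned above from the raw counts.
function businessMetrics(c: DailyCounts) {
  return {
    conversionRate: c.signups / c.visitors,
    surveyStarts: c.surveyStarts,
    surveyCompletes: c.surveyCompletes,
    redemptions: c.redemptions,
    revenuePerComplete: c.revenue / c.surveyCompletes,
  };
}
```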

Feature Flag

For every new feature, we add a feature flag option. We do this because we want to make sure there’s a way to turn off the feature quickly if there’s an issue when the feature appears on production. Common issues that impact production are lagging site performance, bugs, and dips in business metrics.
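DISQO’s actual flagging system isn’t described in the article, but the pattern is simple enough to sketch. The flag name and fallback below are illustrative; the key point is that the old behavior stays available and the flag can be flipped without a new deployment.

```typescript
// A minimal sketch of the feature-flag pattern described above.
// Flags come from configuration, so a feature can be switched off in
// production without redeploying code.
const flags: Record<string, boolean> = {
  'new-redemption-flow': true, // hypothetical flag name
};

function isEnabled(flag: string): boolean {
  return flags[flag] ?? false; // unknown flags default to off
}

function renderRedemptionFlow(): string {
  // Fall back to the existing behavior whenever the flag is off.
  return isEnabled('new-redemption-flow')
    ? 'new redemption flow'
    : 'legacy redemption flow';
}
```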

Error Handling

We want the product to provide some kind of help or response for every user issue we can anticipate. Messages can include prompts to contact Customer Success or to try again.
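As a sketch of that idea (the error categories and wording here are hypothetical, not DISQO’s actual copy), failures can be mapped to one of the two kinds of messages described above:

```typescript
// Hypothetical error categories for illustration.
type UserFacingError = 'retryable' | 'unrecoverable';

function messageFor(kind: UserFacingError): string {
  switch (kind) {
    case 'retryable':
      // Transient problems (e.g., a timed-out request) prompt the member to retry.
      return 'Something went wrong. Please try again.';
    case 'unrecoverable':
      // Anything we cannot resolve automatically points the member to support.
      return 'We were unable to complete your request. Please contact Customer Success.';
  }
}
```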

Once a feature is in production, we run through workflows on Survey Junkie to ensure, once again, that everything works as expected. Testing takes anywhere from five minutes to eight hours, and every engineer participates in the process. During refinement, the team includes test writing, test case creation, and UAT in their task estimates.

Software Engineers at DISQO are coders and testers, but they are also responsible for building viable products that operate in the real world. An experienced engineer knows how to do their job well but also understands the business end of why they are building something. As the Product Manager of my team, I communicate the what and why of proposed new features. This frees engineers to define how the feature will be built and puts more emphasis on comprehensive testing.


William Wong is a Senior Product Manager at DISQO who leads the Purple Electric Penguins team. He works with stakeholders across various departments to improve Survey Junkie and execute the product roadmap. In his spare time, he spends time with his Bichon Poodle, Russell, and plays video games with his PEP teammates.
