He just turned 5 months old and is now a very happy baby. But he was not always very happy. You wouldn’t believe it from this picture, but Colton has gone through more in his first 5 months of life than most adults have in decades.
Back in September, Colton’s mother learned that she was expecting her second child, Colton. Unfortunately, routine blood work revealed that the baby was at high risk for a serious chromosomal disorder called Edwards Syndrome (or Trisomy 18). Most fetuses with this condition do not make it to term or, if they do, usually don’t live more than a month after birth.
In a small glimmer of hope, we met with a genetics counselor who explained that the sample did not contain enough blood to give true results. But later, at 13 weeks, an ultrasound showed signs of Trisomy 18, though the doctor was not able to confirm the diagnosis with certainty. That day, Colton’s mother was presented with two other genetic testing options, each with 99% accuracy in determining whether the baby did, in fact, have this defect.
She decided to do a chorionic villus sampling (CVS) test, in which the doctor uses ultrasound to guide a needle through the abdomen to the placenta and draw a sample of tissue and blood. In addition to the pain the mother suffers during this procedure, it carries a 1-in-100 chance of miscarriage. The test is 99% accurate in detecting a genetic disorder, with a 1% chance of a false positive. Results come back within a few days, as opposed to a few weeks with the other test.
After a very long weekend of stress, Colton’s mother received a call from the doctor. Results from the CVS test came back negative. The baby was determined to not have Trisomy 18. She was so relieved. But why was she put through all this stress? Why did the first blood test not produce proper results? Was it a quality test? More importantly, can a test that does not have enough information to give a proper result report any results at all?
When we test software, we ask ourselves many of these same questions. Are our test cases quality? Do they have enough information and acceptance criteria to be executed properly? Do we have expected results? This is testing 101.
In the case of Colton’s mother, not having enough blood (in testing terms, not meeting the acceptance criteria) should have been a red flag. That test should have failed immediately and never made it to a decision stage. Her stress was a direct result of poor testing.
Maybe this scenario is considered an edge case (although that seems unlikely, given what the test is used for) and was missed during regression testing. Maybe this type of test does not have direct inputs and outputs. Maybe it’s based on machine learning. Even then, some confidence should come with the results, yet she was never given a confidence score for her blood work. Is it quality to withhold from the patient how confident the test was in its own result? If a quality assurance team told its stakeholders, “Yeah, our tests passed. Regression looks good,” without providing metrics for which exact tests passed or failed, or showing the results, that stakeholder’s trust would not last long. There need to be metrics stored somewhere for checks and balances.
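To make the analogy concrete, here is a minimal sketch of what that would look like in code. Every name and threshold below is invented for illustration: a test that fails fast when its input does not meet the acceptance criteria, and that attaches a confidence score to any result it does report.

```python
# Hypothetical sketch: a screening "test" that refuses to report a result
# when its input fails the minimum acceptance criteria, and that always
# attaches a confidence score to whatever it does report.

MIN_SAMPLE_VOLUME_ML = 10  # made-up acceptance criterion

def run_screening(sample_volume_ml, risk_score):
    """Return a result with a confidence score, or fail fast on bad input."""
    if sample_volume_ml < MIN_SAMPLE_VOLUME_ML:
        # Fail immediately rather than guessing from insufficient data.
        raise ValueError("insufficient sample: test aborted, no result reported")
    # Toy confidence model: more sample, more confidence, capped at 1.0.
    confidence = min(1.0, sample_volume_ml / (2 * MIN_SAMPLE_VOLUME_ML))
    return {
        "result": "high risk" if risk_score > 0.5 else "low risk",
        "confidence": round(confidence, 2),  # always reported to the patient
    }
```

The design choice is the point: an under-specified input never reaches the decision stage, and no result ships without its confidence attached.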
Ecstatic that her baby did not have a fatal diagnosis, Colton’s mother returned to the doctor who performed her CVS test to see the baby’s progress. This time, the doctor was having trouble seeing all four chambers of the heart. During the initial ultrasound, the doctor had pointed out that the heart did not look right but it was also a little bit early at the time. So, this wasn’t shocking news. That doctor referred Colton’s mother to one of the best hospitals in the country that specializes in pre-birth congenital heart defects.
Before we continue Colton’s and his mother’s journey, let us step back and ask ourselves: was this doctor quality? He was absolutely concerned that the baby might have Trisomy 18. He didn’t jump to conclusions and tell Colton’s mother that she should terminate right away. He didn’t force her into other invasive testing any earlier than it needed to be done. He waited and took the proper steps toward accurately diagnosing the problem. He ended up referring her to someone who knew much more about the heart than he did.
Sometimes in software testing, we see that a feature is not working 100% properly but don’t know exactly why. We end up looking at console errors, network logs, database tables, etc. to try to see where functionality failed. In some cases, the error isn’t staring us in the face, and we need to do some digging or seek an opinion from another tester or developer who might see what we don’t. Or we seek somebody with more experience in a certain area who can give us better answers. This is quality. We can’t say a feature works just because it functions in one area while side effects or bugs linger in others, or because we can’t pinpoint exactly what is wrong. That is not quality.
A week later, Colton’s mother sought a second opinion regarding Colton’s heart. After an extensive echocardiogram, she met with the director of the hospital’s fetal heart program, as well as a social worker, nurse, administrative assistant, and fellow doctors in a round table discussion about Colton’s heart. It was determined at that time that Colton did, in fact, have a missing chamber of the heart, confirming what the ultrasound doctor had seen—a congenital heart defect called Hypoplastic Left Heart Syndrome (HLHS) that affects approximately 960 newborns each year in the United States.
The doctors assured Colton’s mother that they could “fix” him and give him a chance at a “normal” life. They gave her their plan of attack and allowed her to make a decision on how she would like to proceed. In addition to three surgeries, there were many research studies available for Colton and his mother to participate in. Some of them could have a direct impact on Colton. Some would not affect him but could potentially help other babies in the future. She decided to trust the doctors and see them every month until her due date, at which time she planned to deliver in the hospital so that doctors could provide medication and prepare the newborn for surgery, scheduled to happen only a few days after delivery. In their eyes, this plan reduced the baby’s risk.
Colton’s mother believed she was in the hands of quality. Doctors who seemed to genuinely care about her and her son’s well-being provided her with all of the options and information they could give. And online research showed only good reviews of this hospital and its team of doctors. She trusted this team of experts.
In software development, we have to trust that the leads, architects, and principal engineers can, in fact, build something that works efficiently and effectively and that will scale with new features and customer requests.
Colton’s mother put her trust in a doctor with a wealth of degrees and awards, someone who leads conferences in his field, is at the heart of research, and was on the team that helped develop one of the staged procedures.
Three weeks prior to her due date, Colton’s mother went for her checkup at the hospital. Everything was on track, so she went home. But later that day, she was back in the hospital and, within two hours, Colton was born. Upon delivery, he was taken to be prepared for surgery. Even though the surgery wouldn’t happen for a few days, they needed to get IVs and arterial lines inserted to administer medications that would keep him stable until his operation.
So what happened here? Why did Colton arrive so soon? Was there something that should have been done at the appointment that would have indicated Colton’s early arrival? Would they have checked her cervix, even though she was still weeks away from her due date? Maybe she was dilated that morning? Maybe the doctor would have seen that and asked her to stay? A lot of what-ifs here. Most likely, even if she were dilated a bit but not having contractions, they would still have sent her home. Was this quality care? In the eyes of the hospital, it probably was. In the eyes of Colton’s mother, it wasn’t. She would rather not have had to make a grueling 90-minute drive to the hospital while in labor. This is the main reason she had scheduled Colton’s induction for a specific day.
If we run regression tests before all of the features are complete and find failures, do we call them out? It’s a bit of a gray area. On one hand, yes, something is not working correctly. On the other hand, the test might be failing because certain functionality is not implemented or merged yet. I’d rather run the test, see the failure, and note it until the feature is complete and regression runs again. At least then I have something to compare the results to.
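The idea of noting a failure now so it can be compared later can be sketched in a few lines. This is an illustrative toy, not a real test runner; all names are made up.

```python
# Illustrative sketch: run regression even when a feature is incomplete,
# record the failures, and diff them against the next run instead of
# discarding the information.

def run_regression(tests):
    """Run each named test callable; return the set of names that failed."""
    failures = set()
    for name, test in tests.items():
        try:
            test()
        except AssertionError:
            failures.add(name)
    return failures

def compare_runs(before, after):
    """Diff two regression runs: what got fixed, what regressed, what remains."""
    return {
        "fixed": before - after,          # failed before, passing now
        "regressed": after - before,      # new failures since last run
        "still_failing": before & after,  # known failures carried forward
    }
```

Real test frameworks offer the same concept as "expected failure" markers; the point here is simply that a noted failure is data you can compare against, while an unrun test is nothing.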
At the time of the appointment, Colton’s mother thought they would have at least checked, even though the chance was slim that her cervix would be dilated. At least then they could have said she was dilated that morning, and her returning eight hours later would have made sense.
Before long, Colton was hooked up to many IV lines, had EKG stickers all over his chest, was under lights for jaundice, and had a nasal cannula providing oxygen. As his breathing became more rapid, the doctors and nurses decided it would be best to move his surgery up a day. It was originally scheduled for Tuesday, five days after birth, which is normal for babies with this condition. So, why not let him hold off another day? Was his breathing that bad? Was this a quality move?
The way a hospital conducts its status meetings—or “standups” in the Agile world—is known as rounds. Doctors on call that day come around to each room with their rolling computer desk, along with all other stakeholders or personnel. The way they conduct their rounds is a sight to see. Every person in attendance takes part. The nurses read back daily stats to the doctors. Other doctors chime in on the plan for the day. This is all done in about 10-15 minutes or so. Parents are encouraged to take part in these rounds and to make sure they hear exactly what is going on.
This time, the doctors and supporting staff thought it might not be safe to hold the surgery off another day as Colton’s breathing kept worsening. Was this quality? Was 15 minutes enough to talk about a life-or-death situation? If we had questions, the doctors answered them. If we had concerns, they assured us of the plan to address them. They made us feel like part of the team.
Being on a team where members of QA are embedded in the project team makes such a huge difference. I’ve been on teams in the past where QA is a shared resource across multiple projects, which means that QA may not know the development team or the product that well and isn’t seen as a stakeholder. At SemanticBits, QA is embedded and an integral part of the process. Developers count on QA to call out their human mistakes or misunderstandings of requirements. So, in standups, if something major is happening that day or there are questions, we are there to answer them. And, just like with a doctor’s rounds, 15 minutes is usually enough.
The surgery went well, Colton’s breathing issues subsided, and he was on his recovery path. A month later, Colton was discharged, subject to biweekly checkups with his pediatrician and cardiologist to ensure that nothing worsened while we all waited for him to grow bigger and stronger for the next surgery in four months.
Did he need checkups so often? Or would monthly be enough? In cases like Colton’s, changes in heart function can happen quickly. The first surgery was only meant to hold him over until he was bigger and stronger for the next round, which would put a more permanent solution in place. Still a patch in my eyes, it gave him more time to grow, gave the doctors more time to learn his anatomy, and gave his anatomy more time to dictate the next move. For the doctors, it was a chance to see him every two weeks to ensure his heart was functioning properly, make sure the shunt they put in did not clot too much, and provide another set of eyes on his oxygen levels, heart rate, and blood pressure.
Beyond Scrum and Sprint meetings, some software projects have weekly meetings to make sure things are on track. They serve a purpose. They aren’t needed every day, but once a week or every two weeks is just enough to keep the train moving on the tracks.
Four months later, Colton was 10 pounds heavier and back at the hospital for pre-testing for his second surgery. This consisted of routine blood tests and a cardiac catheterization for the surgeon and other doctors to get a closer look at his anatomy before they opened him up again for surgery. After the catheterization, as he came off sedation, Colton’s oxygen levels were lower than doctors liked. This wasn’t the first time it happened, and his numbers were borderline normal, but the doctors wanted to be cautious. Instead of sending him home for the weekend to rest for Monday’s surgery, they felt it best if he stayed in the hospital to be monitored. Was keeping him in the hospital for two extra days quality? Given his lower-than-normal oxygen levels, should his surgery have been sooner?
This is like a planned release. We do some regression testing and performance testing, see how things are working before we cut the release, and then release it into the wild for customers to use. Colton’s case is no different. The smoke test (blood test) was no issue. The performance test (catheterization) was mostly positive, but decreased the body’s function in a way that wasn’t always explainable. Maybe the database has grown too big and response times are slower because queries take longer. Maybe we aggravated the API service and it’s just a little bit slower to respond. Maybe this was a known issue, and the fix or feature scheduled for a later release will patch all of this up. Do we hold up the release or address the problem after it’s in our customers’ hands?
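That go-or-hold judgment can be written down as a simple gate. This is a hypothetical sketch with invented names and thresholds, not a real release pipeline; the point is that the decision criteria are explicit rather than ad hoc.

```python
# Hypothetical release gate: weigh smoke results, performance degradation,
# and whether a known issue already has a planned fix before shipping.

def release_decision(smoke_passed, perf_slowdown_pct, fix_planned_next_release):
    """Return 'ship' or 'hold' based on explicit, toy criteria."""
    if not smoke_passed:
        return "hold"  # never ship past a failing smoke test
    if perf_slowdown_pct > 20 and not fix_planned_next_release:
        return "hold"  # significant, unexplained degradation with no plan
    return "ship"      # acceptable risk; keep monitoring after release
```

Like the doctors keeping Colton for observation, the gate errs toward holding when the degradation is large and unexplained.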
The doctors’ caution wasn’t excessive. It was the quality move, just as their care from the time Colton’s mother was only 13 weeks pregnant was quality, if not necessary.
Quality is at the heart of what we do at SemanticBits. While we might not directly save lives like Colton’s, our impact on the healthcare field is huge, indirectly affecting the lives and care of millions of Americans. As we continue to implement more AI and machine learning, data like the personal health information of kids like Colton will be an integral part of potentially finding the cause for conditions like HLHS. Every time we go to write code or test a new feature, we handle it with care, take our time and do it right. We are nitpicky over every detail, every variable name, every test step. Those things make the difference in quality. It all matters—the quality of code, quality of tests, quality of the application, quality of the company, quality of the end users, quality of the providers, quality of the beneficiaries. I’m proud to be able to do what I love while helping provide quality care for my son and contribute to the research that helps other kids like him find treatments and cures.
The hospital that my family is associated with is doing everything it can to detect these heart defects earlier. This will help determine how to address the situation, how to educate parents, and how to educate themselves in terms of research and discovering why this happens. In software, this is quality assurance shifting left. We strive to test sooner, find bugs sooner, and patch them quicker, before they become so impactful that the release is compromised. The sooner we all shift left, the sooner we make an impact on quality and make the quality of lives around us better.