Learn How to Test Your Code

A comprehensive guide for COSC 102 students

By Jonathan Cook, Haran Eiger, Hans Bressel

Testing Philosophy

Why

Have you ever been caught off guard by a bug with no clear origin or fix? The assignment is due in 20 minutes, panic sets in, and you know your grade is about to take a hit. Testing changes that.

By integrating test suites while writing your code, you create a clearer and more coherent understanding of what your program is doing, and when something goes wrong, you'll know exactly where to look.

Testing isn't just for catching mistakes: it gives you confidence that your code works as expected and helps document its intended behavior for yourself and others.

When

You should start testing as soon as you begin outlining your code. The moment you understand what your program is supposed to achieve is the moment you can begin designing meaningful tests.

Testing early lets you identify edge cases and confirm that the core logic works as intended while you build. That way, if you refactor or update your code later, your pre-written tests serve as a safety net, quickly catching any unintended changes before they become serious problems.

How

Ensure that you understand both the inputs and outputs of your code very clearly. Ask yourself: What are the boundaries of the input? What are the distinctions between different input categories?

Are all lines of code and all conditional branches being exercised by your tests? By consistently asking these kinds of questions, you'll define your input and output space thoroughly and ensure that every line of your code is doing exactly what it's supposed to.


Our 4-Step Testing Approach

The testing approaches detailed in this project, Boundary Value Testing (BVT), Equivalence Class Testing (ECT), and Line/Branch Coverage, are designed to directly target the most common loop and boundary errors students encounter in COSC 102. These techniques were intentionally ordered to build progressively: starting with simple boundary testing, moving to strategies that partition the input or output space, and culminating in ensuring full line and branch coverage. Each method adds a new layer of insight into how thoroughly your code is tested.

This sequence provides a structural sweep of your code, ensuring it's not only functionally correct but also robust against edge cases and overlooked logic. To minimize redundancy in test cases, we introduced slight variations across the techniques, allowing students to achieve high coverage with relatively low effort.

The 4 Steps

1. Robust Boundary-Value Tests: filter valid vs. invalid cases
2. Edge-Focused Equivalence-Class Tests: cover the valid outputs
3. Structural Coverage Sweep: hunt for blind spots
4. Suite Refinement: keep what matters, drop the rest


Understanding Approach Limitations

Even the best testing approaches have constraints. Being aware of these limitations helps you supplement with other testing methods when necessary.

  • Coverage ≠ Completeness
  • Manual Effort Scaling
  • OOP Complexity
  • External Dependencies

Example 1: Car Simulation - Drive Method

In this example, we'll walk through the process of testing a car simulation's drive method using our 4-step approach.


Overview

This function is part of a larger program that models a MyCar class, an object-oriented simulation of a car's essential behaviors such as fuel consumption, mileage tracking, and gas refilling. The class includes attributes like fuel efficiency (mpg), gas tank size, total mileage, and gas price, and supports methods that simulate realistic interactions with a car, such as driving, refueling, and maintenance tracking.

The driveCar(double miles) method plays a central role in modeling how fuel is consumed when a car is driven. It checks whether the requested number of miles can be driven based on the car's current gas level and miles-per-gallon efficiency. If the request is valid, it updates the fuel level accordingly and returns true; otherwise, it denies the request and returns false. This simple check ensures that the simulation remains realistic: drivers can't magically exceed their car's fuel limits.

Testing this method is essential because it represents a core mechanic of the simulation: the link between fuel and distance traveled. If the function allows negative or overly long trips to be recorded, the simulation becomes unrealistic. By applying both Boundary Value Testing and Equivalence Class Testing, we can ensure that the driveCar() method behaves as expected across all relevant input ranges and scenarios, preserving both the integrity and believability of the simulation.

The Method We're Testing


public boolean driveCar(double miles){
    if (miles < 0 || miles > this.mpg*this.currentGas){
        return false;
    }
    this.currentGas = this.currentGas - (miles/this.mpg);
    return true;
}
                        

Step 1: Robust Boundary-Value Tests

We want to test the boundaries of the variable miles in this function. To do this, we apply the principles of Boundary Value Testing (BVT), which focuses on values at the very edges of a variable's allowed range—specifically where the program's behavior transitions from one state to another. Rather than merely identifying out-of-bounds errors, BVT is used to verify that logical conditions flip precisely when they're supposed to. These transition points might reflect shifts such as false to true, valid to invalid, or rejected to accepted.

In order to isolate the behavior of the miles parameter, we'll fix the other variables in the function to known values. Specifically, we set mpg = 20 and currentGas = 5, which means the maximum drivable distance is 20 * 5 = 100 miles.

Now we can systematically test values at and around the key thresholds. Because miles is a double, we can test with precise decimal values, not just whole numbers. This allows us to observe how small variations near those critical boundaries affect program behavior.

Test Cases Created in Step 1

Test Case | Input (miles) | Description | Expected Result
BVT1 | -0.1 | Min - 1: just below the allowed minimum | FALSE
BVT2 | 0 | Min: the exact lower boundary | TRUE
BVT3 | 0.1 | Min + 1: just above the lower boundary | TRUE
BVT4 | 99.9 | Max - 1: just below the upper boundary | TRUE
BVT5 | 100 | Max: the exact upper boundary | TRUE
BVT6 | 100.1 | Max + 1: just above the allowed maximum | FALSE
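
As a concrete sketch, the whole table translates into a few one-line JUnit 4 tests. The MyCar constructor used here, MyCar(mpg, tankSize, currentGas, gasPrice), is an assumption based on the attributes described in the overview; adjust it to match your actual class. Note that each test builds a fresh car so fuel state never carries over between cases.

import static org.junit.Assert.*;
import org.junit.Test;

public class DriveCarBVTTest {
    // Assumed constructor: MyCar(mpg, tankSize, currentGas, gasPrice).
    // With mpg = 20 and currentGas = 5, the max drivable distance is 100 miles.
    private MyCar freshCar() {
        return new MyCar(20, 15, 5, 3.50);
    }

    @Test public void bvt1_justBelowMin() { assertFalse(freshCar().driveCar(-0.1)); }
    @Test public void bvt2_exactMin()     { assertTrue(freshCar().driveCar(0));     }
    @Test public void bvt3_justAboveMin() { assertTrue(freshCar().driveCar(0.1));   }
    @Test public void bvt4_justBelowMax() { assertTrue(freshCar().driveCar(99.9));  }
    @Test public void bvt5_exactMax()     { assertTrue(freshCar().driveCar(100));   }
    @Test public void bvt6_justAboveMax() { assertFalse(freshCar().driveCar(100.1)); }
}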

Step 2: Edge-Focused Weak Normal Equivalence-Class Test

Equivalence Class Testing (ECT) involves dividing input values into categories, or "classes," where all values in a class are expected to produce the same outcome. Rather than testing individual edge values like in BVT, ECT helps ensure the function behaves correctly across broader types of inputs. In the case of driveCar(double miles), we again assume mpg = 20 and currentGas = 5, which means the car can drive a maximum of 100 miles.

We can identify the following equivalence classes for the miles input:

  • EC1: Negative input values. Any value less than 0 should be considered invalid and return false.
  • EC2: Valid range values. Any value greater than or equal to 0 but less than or equal to 100 should return true.
  • EC3: Values exceeding the fuel range. Any value greater than 100 should be considered too far to drive and return false.

To ensure representative coverage across the equivalence classes and validate transitions between them, we design the following four test cases:

Test Cases Created in Step 2

Test Case | Input (miles) | Transitions Between Classes | Expected Output
ECT1 | -10 | Within EC1 (invalid input) | FALSE
ECT2 | 0.1 | Transition from EC1 → EC2 | TRUE
ECT3 | 99.9 | Within EC2 (valid driving range) | TRUE
ECT4 | 150 | Transition from EC2 → EC3 | FALSE

These test cases verify that the function correctly interprets inputs from within each behavioral region and handles the transitions across class boundaries appropriately.
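
A table-driven test keeps these class representatives in one place; the sketch below reuses the assumed MyCar constructor from Step 1, building a fresh car per case so fuel state never leaks between them.

import static org.junit.Assert.*;
import org.junit.Test;

public class DriveCarECTTest {
    @Test
    public void equivalenceClassRepresentatives() {
        // {miles, expected}: one representative per class or class transition
        Object[][] cases = {
            {-10.0, false}, // ECT1: within EC1 (invalid negative input)
            {0.1,   true},  // ECT2: EC1 → EC2 transition
            {99.9,  true},  // ECT3: within EC2 (valid driving range)
            {150.0, false}, // ECT4: EC2 → EC3 transition
        };
        for (Object[] c : cases) {
            MyCar car = new MyCar(20, 15, 5, 3.50); // assumed constructor, as in Step 1
            assertEquals("miles = " + c[0], c[1], car.driveCar((double) c[0]));
        }
    }
}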

ECT is most effective when used after BVT has exposed where those behavioral boundaries lie. While BVT stresses the inputs right at the turning points (such as 0.0, 100.0, or just beyond), ECT steps back and ensures each logical region on either side of those boundaries is properly represented and covered.

In this way, building one strategy actually helps inform the construction of the other: BVT pinpoints exactly where logic transitions occur, and ECT maps out the broader regions those transitions define. This allows you to test thoroughly without redundancy, using BVT to confirm precision and ECT to ensure consistent coverage across meaningful input classes.


Step 3: Structural Coverage Sweep

Using JaCoCo to analyze our code coverage, we can verify that our test cases from the previous steps provide complete coverage of the driveCar method.

The JaCoCo report confirms that our BVT and ECT test cases fully exercise all lines and branches in the method:

[Screenshot: JaCoCo coverage report for the driveCar method]

As shown in the report, this simple function is fully covered by our BVT and ECT test cases: both the true and false outcomes of the if statement are exercised, so every execution path has been tested.


Step 4: Suite Refinement

For this simple example, refinement is light. The driveCar method has straightforward logic with clear boundaries, so we simply drop the near-duplicate edge probes (BVT3, BVT4, and ECT2) and swap ECT3's edge value for a mid-range representative, keeping one test per distinct behavior.

Here is the refined test suite for the driveCar method, organized by test type:

Test ID | Test Type | Input (miles) | Description | Expected Result
BVT1 | Boundary Value | -0.1 | Just below minimum | FALSE
BVT2 | Boundary Value | 0 | Exact lower boundary | TRUE
BVT5 | Boundary Value | 100 | Exact upper boundary | TRUE
BVT6 | Boundary Value | 100.1 | Just above maximum | FALSE
ECT1 | Equivalence Class | -10 | Invalid negative input | FALSE
ECT3 | Equivalence Class | 50 | Valid mid-range input | TRUE
ECT4 | Equivalence Class | 150 | Invalid, too-large input | FALSE

JaCoCo confirms we've achieved 100% code coverage with a lean suite: each test case serves a specific purpose, and together they provide complete coverage of the method's behavior.

Example 2: Fantasy Garden Plant Growth

In this example, we'll test a more complex function with state-dependent behavior and multiple conditions using our 4-step approach.


Function Context

This function comes from the Fantasy Garden project in COSC 102, where students simulate plant growth based on environmental conditions. The simDailyGrowth(int temp, int weather) method models a plant's daily growth depending not only on the current temperature and weather inputs but also on internal state, such as whether the plant is wilted and how many consecutive chilly days have occurred. The function includes several conditional branches and makes use of constants defined in the FantasyGarden class:


// Temperature types
public static final int TEMP_CHILLY = 0;
public static final int TEMP_WARM = 1;
public static final int TEMP_HOT = 2;
// Weather types
public static final int WEATHER_SUNNY = 0;
public static final int WEATHER_CLOUDY = 1;
public static final int WEATHER_RAINY = 2;
                    

The growth logic is influenced by whether the plant is already wilted (isWilted()), if the day is chilly, and whether the weather is cloudy or not. If specific chilly and cloudy conditions repeat, the plant may wilt. If the day is suitable, it grows by incrementing previousGrowth. If not, it might reset the growth counter or even return zero. This function illustrates both the challenge of managing internal object state and the complexity introduced when conditions depend on multiple variables interacting.

The Function We're Testing


public int simDailyGrowth(int temp, int weather){
    if (isWilted()){ //if wilted
        return 0;
    }
    if (chillyCounter == 1 && temp == FantasyGarden.TEMP_CHILLY){
        wilt(); //wilted condition
        return WILTED_VALUE;
    } else if (weather != FantasyGarden.WEATHER_CLOUDY){
        chillyCounter = 0;
        previousGrowth += 1;
        return (grow(previousGrowth));
    } else if (chillyCounter != 1 && weather == FantasyGarden.WEATHER_CLOUDY && temp == FantasyGarden.TEMP_CHILLY){
        previousGrowth = 1;
        chillyCounter = 1;
        return (grow(previousGrowth));
    } else if (weather == FantasyGarden.WEATHER_CLOUDY && temp != FantasyGarden.TEMP_CHILLY){
        previousGrowth = 1;
        chillyCounter = 0;
        return grow(previousGrowth);
    }
    return 0;
}
                        

Step 1: Robust Boundary-Value Tests

In this function, boundary value testing isn't about numeric min/max thresholds. Instead, we're testing the logic around conditional transitions: the exact points where the function's behavior flips due to changes in temperature, weather, or internal state. These boundaries may not look like traditional math, but they represent the edges of behavioral change. For example, the plant's transition from healthy to wilted is not determined by a simple number; it's triggered by the combination of a specific internal state (chillyCounter == 1) and a specific input (temp == TEMP_CHILLY).

BVT is used here to deliberately probe those transitions: when a plant starts to grow, when it resets its growth cycle, or when it shuts down altogether. We don't need to try every possible input; instead, we construct targeted test cases that sit precisely at those flip points. These are the decision thresholds where the program's return value will change based on just one piece of input. For clarity, we'll use the numerical values (0, 1, 2) rather than the constant names in our test cases.

For example:

  • BVT2 tests the moment when the plant switches from being on the edge of wilting to actually wilting. This is only triggered under very specific conditions (counter = 1 and temp = 0, which is the TEMP_CHILLY constant in the code).
  • BVT4 confirms that non-chilly temperatures on cloudy days do not cause wilting, helping isolate the logic that distinguishes these similar but subtly different paths.
  • Each test case uses a fresh Plant instance to avoid state contamination between tests, which was critical for tests BVT3 and BVT5.

Test Cases Created in Step 1

Test Case | temp | weather | Internal State | Description | Expected Output
BVT1 | 0 | 1 | wilted=true | Already wilted plant | 0
BVT2 | 0 | 0 | counter=1, fresh Plant | Repeated chilly days trigger wilt | WILTED_VALUE
BVT3 | 0 | 1 | counter=0, fresh Plant | Cloudy + chilly sets up counter = 1 | grow(1)
BVT4 | 1 | 1 | counter=0, fresh Plant | Cloudy + warm resets counter = 0 | grow(1)
BVT5 | 1 | 2 | counter=1, previousGrowth=1, fresh Plant | Rainy + warm day increments growth | grow(2)

This focused approach lets us quickly confirm that all logical gates in the function work properly without needing a bloated test suite. It emphasizes precision over repetition.
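
A sketch of two of these cases in JUnit 4 is shown below. It assumes the Plant class exposes some way to seed its internal state for testing; the setChillyCounter and setPreviousGrowth setters here are hypothetical, and your class may instead require driving it into the right state through prior simDailyGrowth calls.

import static org.junit.Assert.*;
import org.junit.Test;

public class SimDailyGrowthBVTTest {
    // Hypothetical state seeding; substitute whatever your Plant class provides.
    private Plant plantWith(int chillyCounter, int previousGrowth) {
        Plant p = new Plant();
        p.setChillyCounter(chillyCounter);   // assumed setter
        p.setPreviousGrowth(previousGrowth); // assumed setter
        return p;
    }

    @Test
    public void bvt1_alreadyWiltedPlantReturnsZero() {
        Plant p = new Plant();
        p.wilt(); // wilt() is called inside simDailyGrowth, so we assume it's accessible here
        assertEquals(0, p.simDailyGrowth(FantasyGarden.TEMP_CHILLY, FantasyGarden.WEATHER_CLOUDY));
    }

    @Test
    public void bvt2_repeatedChillyDaysTriggerWilt() {
        Plant p = plantWith(1, 0); // counter = 1: one chilly day already recorded
        // WILTED_VALUE is assumed to be a visible constant on the plant's class
        assertEquals(Plant.WILTED_VALUE, p.simDailyGrowth(FantasyGarden.TEMP_CHILLY, FantasyGarden.WEATHER_SUNNY));
        assertTrue(p.isWilted());
    }
}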


Step 2: Edge-Focused Weak Normal Equivalence-Class Test

Here we categorize input scenarios that result in consistent behavior and group them into equivalence classes. These categories help simplify the testing process by covering more ground with fewer cases. Instead of trying every single permutation of temp, weather, and internal state, we select one representative input for each class that exhibits a unique behavior pattern. This function presents a classic case where internal state influences outcomes just as much as input parameters. For example, a test where temp = TEMP_CHILLY and weather = WEATHER_CLOUDY might result in different outcomes depending on whether chillyCounter is 0 or 1. That's why our equivalence classes take both current inputs and the plant's internal memory into account.

Each test here is chosen to represent a class of behavior, not just a unique line of code:

  • EC1 is a hard stop: once a plant is wilted, nothing else matters; it always returns 0.
  • EC3 and EC4 both deal with cloudy days, but the difference in temperature causes very different logic to execute. These are tested separately because they reflect two distinct branches of the function.
  • EC5 captures all non-cloudy weather outcomes where growth should continue, offering a broader behavioral region that's consistent in logic.

Equivalence Classes for simDailyGrowth

Class ID | Description | Input/State Conditions | Expected Behavior
EC1 | Already wilted plant | isWilted() == true | Return 0 (no growth)
EC2 | Plant about to wilt | chillyCounter == 1 && temp == 0 (TEMP_CHILLY) | Return WILTED_VALUE
EC3 | First chilly, cloudy day | chillyCounter == 0 && weather == 1 (CLOUDY) && temp == 0 (CHILLY) | Set chillyCounter = 1, return grow(1)
EC4 | Cloudy but not chilly | weather == 1 (CLOUDY) && temp != 0 (not CHILLY) | Reset chillyCounter, return grow(1)
EC5 | Not cloudy weather | weather != 1 (not CLOUDY) | Reset chillyCounter, grow by previousGrowth + 1

Test Cases Created in Step 2

Test Case | Input Values | Equivalence Class | Expected Result
ECT1 | any temp/weather, wilted = true, fresh Plant | EC1 | 0
ECT2 | temp = 0, weather = 0 (WEATHER_SUNNY), counter = 1, fresh Plant | EC2 | WILTED_VALUE
ECT3 | temp = 0, weather = 1, counter = 0, fresh Plant | EC3 | grow(1)
ECT4 | temp = 1, weather = 1, counter = 0, fresh Plant | EC4 | grow(1)
ECT5 | temp = 1, weather = 2, previousGrowth = 3, fresh Plant | EC5 | grow(4)

By clearly separating these equivalence classes and testing only once per class, we avoid redundancy while still ensuring total behavioral coverage. It's a high-efficiency way to confirm that each type of scenario behaves as expected without overwhelming the test suite with duplicates.
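
EC1's "hard stop" property is also cheap to verify exhaustively, since the (temp, weather) input space is only 3 × 3. A sketch, reusing the assumptions from Step 1:

import static org.junit.Assert.*;
import org.junit.Test;

public class SimDailyGrowthECTTest {
    @Test
    public void ect1_wiltedPlantReturnsZeroForEveryInput() {
        // EC1 says inputs are irrelevant once the plant is wilted,
        // so check every (temp, weather) combination on a fresh wilted plant.
        for (int temp = 0; temp <= 2; temp++) {
            for (int weather = 0; weather <= 2; weather++) {
                Plant p = new Plant();
                p.wilt(); // assumed accessible, as in Step 1
                assertEquals("temp=" + temp + ", weather=" + weather,
                             0, p.simDailyGrowth(temp, weather));
            }
        }
    }
}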


Step 3: Structural Coverage Sweep

After running our BVT and ECT tests through JaCoCo, we found that there was still one line of code that wasn't being executed at all (shown in red in the JaCoCo report):

  • The final return 0 statement at the bottom was never executed (red line)
  • This represents an important edge case in our ECT that we missed in our original analysis
  • While there was also yellow highlighting for some conditions, we're primarily focused on achieving line and branch coverage at this stage, not condition coverage

To achieve complete line and branch coverage, we added a specific test to target the red line (the uncovered fall-through path):

Test ID | Inputs | Internal State | Purpose | Expected Result
SC1 | temp=0, weather=1 | counter=1, fresh Plant | Exercise the cloudy + chilly, counter=1 fall-through path | 0

The JaCoCo report now confirms that with our complete test suite, we've eliminated all red lines, achieving full line and branch coverage for the simDailyGrowth method. This test case was essential because it represents an equivalence class that our earlier analysis missed, in which the plant has a chillyCounter of 1 on a cloudy, chilly day but doesn't wilt:

[Screenshot: JaCoCo coverage report for the simDailyGrowth method]

As shown in the report, all lines of code are now exercised, with no red lines remaining. Upon rerunning JaCoCo with our updated test suite, the previously missed return line (line 671) turned green. While some conditions may still show yellow highlighting (indicating that not all possible boolean combinations were tested), we've achieved our primary goal of ensuring every line of code and every branch is exercised at least once. This highlights why JaCoCo is such a valuable final check: it revealed a meaningful edge case that our ECT analysis had overlooked.


Step 4: Suite Refinement

After examining our complete test suite, we can make the following refinements:

  1. Eliminate redundancy: ECT1 duplicates BVT1 (both exercise the already-wilted plant), so we can remove one of them
  2. Combine similar cases: ECT2 and BVT2 exercise the same wilting transition, so we can merge them
  3. Prioritize the most revealing tests: the wilting transition tests are particularly important

Here is the complete test suite for the simDailyGrowth function, organized by test type and annotated with each test's priority:

Test ID | Test Type | Description | Inputs | Priority
BVT1 | Boundary Value | Already wilted plant behavior | wilted=true, any temp/weather, fresh Plant | Medium
BVT2 | Boundary Value | Wilting transition | temp=0, weather=0, counter=1, fresh Plant | High
BVT3 | Boundary Value | First chilly + cloudy day | temp=0, weather=1, counter=0, fresh Plant | Medium
BVT4 | Boundary Value | Cloudy + warm resets counter | temp=1, weather=1, counter=0, fresh Plant | Medium
BVT5 | Boundary Value | Growth increment | temp=1, weather=2, counter=1, previousGrowth=1, fresh Plant | High
ECT1 | Equivalence Class | Already wilted plant | wilted=true, any temp/weather, fresh Plant | Medium
ECT2 | Equivalence Class | Plant about to wilt | temp=0, weather=0, counter=1, fresh Plant | Medium
ECT3 | Equivalence Class | First chilly, cloudy day | temp=0, weather=1, counter=0, fresh Plant | Medium
ECT4 | Equivalence Class | Cloudy but not chilly | temp=1, weather=1, counter=0, fresh Plant | Medium
ECT5 | Equivalence Class | Not cloudy weather | temp=1, weather=2, previousGrowth=3, fresh Plant | Medium
SC1 | Structural Coverage | Cloudy + chilly with counter=1 fall-through | temp=0, weather=1, counter=1, fresh Plant | Low

This refined suite balances thorough coverage with efficiency, focusing on the most important behavioral aspects while still ensuring all code paths are tested at least once. By eliminating redundant tests and prioritizing the most critical scenarios, we've created a test suite that efficiently validates the simDailyGrowth function's behavior across all its logical boundaries.

Example 3: Wordle-style Game Mechanic

In this example, we'll apply our 4-step approach to test a function that supports a Wordle-style game mechanic, which compares guessed letters with a secret word and provides color-coded feedback.


Function Context

This function supports a Wordle-style game mechanic. It compares the guessed letter at index j in the current row with the secret word's character at the same index. If the characters match exactly, the tile is colored green and the rightSpot counter is incremented. If the guessed character exists elsewhere in the secret word (but not at the current index), the tile is colored yellow unless it's already marked green. If the guessed character is not in the secret word at all, the tile is colored gray.

The Function We're Testing


public static int checkLetters(int j, int rightSpot){ // compares the guessed word to the secret word
    char wordLetter = GameGUI.getSecretWordArr()[j]; // gets letter from answer word
    char guessLetter = GameGUI.getGridChar(currentRow, j); // gets letter in the spot
    if (wordLetter == guessLetter){ // colors in boxes if letters are equal
        GameGUI.setGridColor(currentRow, j, CORRECT_COLOR); // sets color to green
        GameGUI.setKeyColor(wordLetter, CORRECT_COLOR);
        rightSpot += 1; // guess is correct for the secret at this spot; otherwise we differentiate between yellow and dark gray below
    } else {
        for (int i = 0; i < MAX_COLS; i++){
            if (guessLetter == GameGUI.getSecretWordArr()[i]){
                GameGUI.setGridColor(currentRow, j, WRONG_PLACE_COLOR);
                if (GameGUI.getKeyColor(guessLetter) != CORRECT_COLOR){
                    GameGUI.setKeyColor(guessLetter, WRONG_PLACE_COLOR);
                }
            } else if (GameGUI.getKeyColor(guessLetter) != WRONG_PLACE_COLOR && GameGUI.getKeyColor(guessLetter) != CORRECT_COLOR){
                GameGUI.setKeyColor(guessLetter, WRONG_COLOR);
            }
        }
    }
    return rightSpot;
}

Step 1: Robust Boundary-Value Tests

This problem has two dimensions that influence the function's behavior: the index j and the guessed word itself. To thoroughly apply Boundary Value Testing, we isolate each of these variables and test how the logic transitions as we move across their respective boundaries. First, we hold the index constant and vary the guessed letter to reveal how matches, near-matches, and mismatches affect the outcome. Next, we hold the guessed word constant and vary the index to examine how the same characters behave across different positions in the word.

This two-dimensional BVT approach demonstrates how we can apply boundary testing not only across numeric thresholds, but across distinct control flow transitions, validating correctness from multiple interacting angles.

Equivalence Classes for checkLetters

Class | Condition | Description | Expected Behavior
EC1 | guess[j] == secret[j] | Exact match | Green, increment rightSpot
EC2 | guess[j] != secret[j] && guess[j] in secret | In word but wrong position | Yellow, rightSpot unchanged
EC3 | guess[j] not in secret | Completely incorrect | Gray, rightSpot unchanged

These classes represent the boundaries of the green → yellow → gray transitions, based on exact character alignment and word inclusion: an exact match, a letter in the word but in the wrong position, and a letter not in the word at all.

This isolation allows us to precisely identify the boundaries that shift the program from one behavior (green coloring and incrementing rightSpot) to another (yellow or gray coloring with no increment). Boundary Value Testing is not limited to numeric comparisons; it can be extended to categorical logic, such as character string comparisons and boolean condition transitions, wherever a meaningful behavioral boundary exists.
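
Because the real checkLetters is entangled with GameGUI state, it helps to reason about these classes with a GUI-free sketch of the same per-tile rule. The TileRules helper below is illustrative only (it omits the keyboard-coloring side effects and is not part of the project code), but it captures exactly the EC1/EC2/EC3 boundaries we are probing:

public class TileRules {
    enum Tile { GREEN, YELLOW, GRAY }

    static Tile classify(char[] secret, char[] guess, int j) {
        if (guess[j] == secret[j]) {
            return Tile.GREEN;      // EC1: exact match at this index
        }
        for (char c : secret) {
            if (c == guess[j]) {
                return Tile.YELLOW; // EC2: in the word, but at the wrong position
            }
        }
        return Tile.GRAY;           // EC3: not in the word at all
    }
}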

Test Cases Created in Step 1 - Dimension 1: Character Variation

For these tests, we hold the index constant (j = 0) and vary only the guessed letter to examine how the output behavior changes across the key logical transitions.

Test ID | Secret Word | Guessed Word | Index (j) | Expected Behavior | Boundary Being Tested
BVT1 | "CRANE" | "CRASS" | 0 | Green, rightSpot++ | 'C' exact match (EC1)
BVT2 | "CRANE" | "EATER" | 0 | Yellow, rightSpot unchanged | 'E' in word but wrong position (EC2)
BVT3 | "CRANE" | "PRANK" | 0 | Gray, rightSpot unchanged | 'P' not in word (EC3)

Test Cases Created in Step 1 - Dimension 2: Position Variation

We then shift focus to the second dimension, position. By keeping the guess word constant and varying the index j, we test how the function applies the same comparison logic across multiple positions in the word. Using a partially overlapping word like "CRASS" against the secret word "CRANE" allows us to track how the same letter behaves differently depending on its position.

Test ID | Secret Word | Guessed Word | Index (j) | Expected Behavior | Boundary Being Tested
BVT4 | "CRANE" | "CRASS" | 0 | Green, rightSpot++ | 'C' at correct position (EC1)
BVT5 | "CRANE" | "SCARS" | 1 | Yellow, rightSpot unchanged | 'C' in word but at the wrong position (EC2)
BVT6 | "CRANE" | "TEACH" | 2 | Green, rightSpot++ | 'A' exact match at position 2 matching 'A' in "CRANE" (EC1)

To make this truly reflect Boundary Value Testing, we observe how the same guessed letter produces different behavior at different positions. For example, the character 'C' appears in both guess words, "CRASS" and "SCARS", but at different indices. In "CRASS", it aligns with the same position in the secret word (index 0), triggering a correct match (green). In "SCARS", the same 'C' appears at index 1, where it no longer matches the corresponding secret character ('R'), so the tile turns yellow instead: the letter is still in the word, just in the wrong place. This demonstrates that the function's output is determined not solely by the presence of a letter but also by its position.


Step 2: Edge-Focused Weak Normal Equivalence-Class Test

While BVT focuses on finding the precise transition points in logic, ECT helps validate the consistency of behavior within those broader regions. It is most effective when used after BVT has mapped out where those boundaries occur. For example, once BVT shows that the function transitions from green to yellow when the guessed letter changes or moves, ECT ensures that any value that falls into the "green" or "yellow" category behaves as expected, regardless of the specific input.

In the case of checkLetters, BVT helps identify that the output changes based on character match and position. ECT then groups those outcomes into logical classes—exact match, partial match, and no match—and checks whether the function treats all members of each class consistently. This layered approach reduces redundancy while improving coverage: BVT highlights the edges, and ECT confirms the rules within each region.

  1. EC1: Exact Character Match - The guessed letter matches the secret letter at the same position
  2. EC2: Character in Word but Wrong Position - The guessed letter exists in the secret word but not at the current position
  3. EC3: Character Not in Word - The guessed letter does not appear anywhere in the secret word

Now we'll create test cases that represent each equivalence class, ensuring that all types of character matches are properly handled by the function.

Test Cases Created in Step 2

Test ID | Secret Word | Guessed Word | Description | Equivalence Class | Expected Behavior
ECT1 | "BRAVE" | "BROKE" | Letters match exactly at j = 0, 1, and 4 | EC1 (multiple positions) | Green for 'B', 'R', 'E'; rightSpot += 3
ECT2 | "BRAVE" | "EARNS" | Contains 'E', 'A', 'R' in wrong positions | EC2 (for multiple j) | Yellow for 'E', 'A', 'R'; gray for 'N', 'S'; rightSpot unchanged
ECT3 | "BRAVE" | "CLOCK" | Contains no letters from the secret word | EC3 (for all j) | Gray for all letters; rightSpot unchanged
ECT4 | "BRAVE" | "VALVE" | Mixed case: 'V' and 'E' match at the right positions; the first 'V' and the 'A' are in the word but misplaced | Mix of EC1 and EC2 | Green for 'V' (j=3) and 'E' (j=4); yellow for 'V' (j=0) and 'A'; gray for 'L'; rightSpot += 2
ECT5 | "SPEED" | "SPELL" | Multiple instances: 'S', 'P', 'E' match at positions 0, 1, 2; SPEED's second 'E' goes unmatched | Multiple instances of EC1 | Green for 'S', 'P', 'E'; gray for both 'L's; rightSpot += 3

By using ECT to sweep across each class, we confirm that the function applies the same logic to any input that falls within a class—ensuring reliability and coherence in its behavior. This approach provides broad coverage while keeping our test suite manageable.
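
To sanity-check a whole row, we can sweep every index of the guess with the TileRules.classify sketch from Step 1; here is the ECT4 case worked end to end:

public class Ect4Demo {
    public static void main(String[] args) {
        char[] secret = "BRAVE".toCharArray();
        char[] guess  = "VALVE".toCharArray();
        int rightSpot = 0;
        for (int j = 0; j < secret.length; j++) {
            TileRules.Tile tile = TileRules.classify(secret, guess, j);
            if (tile == TileRules.Tile.GREEN) rightSpot++;
            System.out.println(j + " '" + guess[j] + "' -> " + tile);
        }
        // Prints YELLOW, YELLOW, GRAY, GREEN, GREEN, so rightSpot == 2,
        // matching the ECT4 row above.
    }
}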


Step 3: Structural Coverage Sweep

After running our BVT and ECT tests through JaCoCo, we analyzed the coverage report to determine if additional test cases were needed:

[Screenshot: JaCoCo coverage report for the checkLetters method]

The JaCoCo report shows that all lines in the checkLetters method are green, with only two conditional statements showing yellow highlighting. Since the lines are yellow rather than red, we know that both paths of these conditionals are being tested, just not all possible combinations of conditions.

For this method, we're not concerned with achieving the full level of rigor in testing all conditional combinations, as it would significantly increase the number of test cases without proportional benefit. The yellow highlighting represents minor variations in how certain edge cases are handled, particularly related to keyboard color assignments that have already been set.

The coverage report confirms that our BVT and ECT approach was sufficient in testing the checkLetters method, as all execution paths are being exercised at least once. This validates our testing strategy and demonstrates that focusing on meaningful boundaries and equivalence classes is an efficient approach to achieve comprehensive testing without creating an excessive number of test cases.


Step 4: Suite Refinement

After examining our complete test suite, we identified opportunities to optimize our testing approach while maintaining comprehensive coverage:

  1. We want to retain tests that verify each of the three main color outcomes (green, yellow, gray)
  2. We need to keep both the single-character tests (BVT) and the multi-character tests (ECT)
  3. We want to ensure tests across different positions to verify index-based behavior

Here is our refined comprehensive test suite for the checkLetters function, organized by test type:

Test ID | Test Type | Secret Word | Guessed Word | Description | Priority
BVT1 | Boundary Value | "CRANE" | "CRASS" | Exact match at position 0 (green) | High
BVT2 | Boundary Value | "CRANE" | "EATER" | Letter in word but wrong position at index 0 (yellow) | High
BVT3 | Boundary Value | "CRANE" | "PRANK" | Letter not in word at index 0 (gray) | High
BVT5 | Boundary Value | "CRANE" | "SCARS" | Letter in word but wrong position at index 1 (yellow) | Medium
BVT6 | Boundary Value | "CRANE" | "TEACH" | Exact match at position 2 (green) | High
ECT1 | Equivalence Class | "BRAVE" | "BROKE" | Multiple exact matches (green) | Medium
ECT2 | Equivalence Class | "BRAVE" | "EARNS" | Multiple letters in wrong positions (yellow) | Medium
ECT3 | Equivalence Class | "BRAVE" | "CLOCK" | Complete miss case (all gray) | Medium
ECT4 | Equivalence Class | "BRAVE" | "VALVE" | Mixed case: exact matches and wrong positions in one word | High
ECT5 | Equivalence Class | "SPEED" | "SPELL" | Multiple instances: handling of repeated letters | Medium

We've dropped BVT4 because it duplicated BVT1, repeating the same exact-match check at index 0. Since we've retained BVT6, which tests an exact match at position 2, we still have sufficient position-variation coverage without BVT4.

This refined suite balances thorough coverage with efficiency, focusing on the most important behavioral aspects while still ensuring all code paths are tested at least once. By organizing our tests in this way, we can quickly see which aspects of the function are most thoroughly tested and which critical behaviors are verified. The high-priority tests capture the essential transition points between the three color states (green, yellow, gray), while the medium-priority tests ensure complete coverage of edge cases.

The checkLetters function demonstrates how our test approach is able to handle more complex code with conditional logic, loops, and state changes. The simple yet robust principles of Boundary Value Testing (BVT) and Equivalence Class Testing (ECT) scale well to this complexity by focusing on the key transitions in behavior rather than exhaustively testing every possible input combination. This approach yields comprehensive test coverage with a manageable number of thoughtfully designed test cases.

Advanced Testing Tools

Setting up a robust testing environment is essential for effective test-driven development. Learn how to leverage Maven and JaCoCo to streamline your testing workflow.

Maven & JaCoCo: Your Complete Testing Toolkit

Maven is the build engine that fetches your dependencies, compiles your code, runs every JUnit test, and—thanks to the bundled JaCoCo plug‑in—spits out a line‑by‑line coverage report in HTML. Getting that toolchain wired up from scratch can be a headache, but don't worry, we've got you covered.

Quick Start Guide

1. Clone the Repository

Clone the ready‑made harness at https://github.com/jon-cook1/102-coverage into the folder that sits next to your lab.

git clone https://github.com/jon-cook1/102-coverage
2. Set Up the Testing Environment

Run the setup script once to configure the testing environment for your lab.

./setup_tests.sh ../YourLabFolder
3. Run Your Tests

After the initial setup, just call the run script whenever you want fresh results.

./run_tests.sh
4. Testing a Different Project

Need to test a different project? Just rerun the setup script with the new folder path.

./setup_tests.sh ../NewLabFolder

Note: Changing labs wipes the tests already inside the harness, because this beta is meant as a quick‑start demo rather than a permanent test repository.

How It Works

The 102-coverage harness automates the entire testing process:

  • Configures Maven to work with your Java project
  • Sets up JUnit test execution
  • Integrates JaCoCo for code coverage analysis
  • Generates HTML reports showing your test coverage
  • Provides a simple interface to run and view test results
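
If you prefer to invoke Maven directly rather than through the scripts (assuming the harness's pom.xml wires in the jacoco-maven-plugin, as described above), the equivalent command is typically:

mvn clean test jacoco:report

The HTML report is then written to target/site/jacoco/index.html by default.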

Full step‑by‑step instructions are available in the repo's README.

Maven

Industry-standard build tool that manages dependencies, compiles code, and runs tests with a simple command.

JaCoCo

Code coverage library that shows exactly which lines of your code are tested and which are not, with intuitive visual reports.
