Test-Driven Development: It's Tedious Until It's Not
Background
I've worked on a variety of software engineering projects, and one thing I have never dived deeply into is the world of unit testing. Sure, I have some experience writing unit tests, but I've mostly written them after the production code was already there.
I knew there exists a paradigm that takes a different approach to building an application than the traditional code-then-test one: test-driven development. But I didn't really understand how different it was until now.
Workflow
What really is test-driven development? Test-driven development, as its name suggests, is developing an application driven by tests. What does that actually mean? It means our production code should be dictated by what our tests expect a class or method to be called with, return, or do. Now, what counts as a valid test here? Unit tests? Functional tests? From what I understand, we can use any form of test to dictate our production code, as long as it correctly describes the flows needed in the object being tested.
Two popular names for the TDD workflow are red-green-refactor and test-feat-refactor. Both describe the same flow with the same intent. I'll use red-green-refactor to explain what each step is for.
- red: write tests that describe and dictate the flow of the method.
- green: implement the method to make the tests pass.
- refactor: improve the code's performance, readability, and so on.
Conceptually, it's a pretty straightforward process, but practically, there are some challenges.
In Practice
I tried applying TDD while working on a mailer service feature for my Software Project course. This mailer service depends on another service, EmailTokenService, which is responsible for generating and validating token URLs (URLs sent to the user to verify their email or reset their account's password). So basically, there are 3 methods:
- validateToken()
- createTokenUrl()
- setTokenAsInvalid(), which marks a token as invalid once it has been used
And I created a file called email-token.service.ts:
// Inject and Injectable come from @nestjs/common;
// PG_CONNECTION and DbType come from the project's own database module
@Injectable()
export class EmailTokenService {
  constructor(@Inject(PG_CONNECTION) private db: DbType) {}

  async setTokenAsInvalid(token: string) {}
}
I've added the method's parameter as well. As you can see, the method body is empty (for now). That's the heart of TDD: define the structure of the class first and implement it later, after writing the tests.
I then set up my tests in email-token.service.spec.ts:
import { Test, TestingModule } from '@nestjs/testing';
// EmailTokenService, PG_CONNECTION, and DbType come from the project's own modules

describe('EmailTokenService', () => {
  let service: EmailTokenService;
  let db: any;

  const mockDb = {
    select: jest.fn().mockReturnThis(),
    from: jest.fn().mockReturnThis(),
    update: jest.fn().mockReturnThis(),
    set: jest.fn().mockReturnThis(),
    where: jest.fn().mockReturnValue(null),
  };

  beforeEach(async () => {
    const module: TestingModule = await Test.createTestingModule({
      providers: [
        EmailTokenService,
        {
          provide: PG_CONNECTION,
          useValue: mockDb,
        },
      ],
    }).compile();

    service = module.get<EmailTokenService>(EmailTokenService);
    db = module.get<DbType>(PG_CONNECTION);
  });

  describe('setTokenAsInvalid', () => {
  });
});
I'm setting up a mock for the database client (I'll be using Drizzle here) and the test environment with Test.createTestingModule(). The test code will go inside describe('setTokenAsInvalid') to keep the tests structured.
Next, it's time to actually write the test. The key is to imagine what should happen in our method. In this case, setTokenAsInvalid() needs to:
- check whether the token is in the correct format (a UUID in this case)
- if the format is not correct, throw an error
- if the format is correct, set the token as invalid in the database.
Let's make the test for the negative case first:
it('should throw error if token format is invalid', async () => {
  const tokenId = 'some-token';

  const setTokenAsInvalid = async () => {
    await service.setTokenAsInvalid(tokenId);
  };

  await expect(setTokenAsInvalid).rejects.toThrow(BadRequestException);
  await expect(setTokenAsInvalid).rejects.toThrow(
    'Token format is invalid',
  );
});
This test "runs" the setTokenAsInvalid() method and expects a specific error type and error message to be thrown.
Now, let's handle the positive case:
it('should call update on db if setTokenIsInvalid is invoked', async () => {
  const tokenId = 'db2053ee-d9fd-45e6-a8cc-838080d48142';

  await service.setTokenAsInvalid(tokenId);

  expect(db.update).toHaveBeenCalledTimes(1);
});
This test expects the update function of the Drizzle client to be called (the Drizzle documentation describes it). Now, here's where I found the test quite challenging. First, I don't think it's appropriate to set up a small real database just to test whether the token's validity status has been updated. Second, it's pretty tricky to mock the Drizzle client, especially because of the builder pattern it uses. That's why I made this mock when setting up the test:
const mockDb = {
  select: jest.fn().mockReturnThis(),
  from: jest.fn().mockReturnThis(),
  update: jest.fn().mockReturnThis(),
  set: jest.fn().mockReturnThis(),
  where: jest.fn().mockReturnValue(null),
};
The mock mimics the builder-pattern nature of the Drizzle client, and when using it, I just need to make sure that the update() method (the mock standing in for the real Drizzle client's) is called.
Now that we have our tests, let's run them. I'm using Yarn and Jest, so I'll run yarn test email-token.service, which runs jest email-token.service:
$ jest email-token.service
FAIL src/email-token/email-token.service.spec.ts
EmailTokenService
setTokenAsInvalid
× should call update on db if setTokenIsInvalid is invoked (14 ms)
× should throw error if token format is invalid (3 ms)
...
Test Suites: 1 failed, 1 total
Tests: 2 failed, 2 total
Snapshots: 0 total
Time: 4.41 s
Ran all test suites matching /email-token.service/i.
We have written two tests and both of them fail. We do not need to panic, because this is the purpose of the red or test step of the red-green-refactor or test-feat-refactor workflow.
Now we need to make both of the tests pass:
// email-token.service.ts
...
async setTokenAsInvalid(token: string) {
  // no `g` flag here: a global regex keeps its lastIndex between
  // .test() calls, which makes repeated checks unreliable
  const uuidRegex =
    /^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$/;

  const tokenFormatIsInvalid = !uuidRegex.test(token);
  if (tokenFormatIsInvalid) {
    throw new BadRequestException('Token format is invalid');
  }

  await this.db
    .update(emailTokens)
    .set({ isValid: false })
    .where(eq(emailTokens.id, token));
}
...
In this method, we check whether the token is in the proper format, throw an error if it's not, and update the token's validity status if it is.
Now let's run the tests again.
$ jest email-token.service
PASS src/email-token/email-token.service.spec.ts
EmailTokenService
setTokenAsInvalid
√ should call update on db if setTokenIsInvalid is invoked (13 ms)
√ should throw error if token format is invalid (14 ms)
Test Suites: 1 passed, 1 total
Tests: 2 passed, 2 total
Snapshots: 0 total
Time: 3.858 s, estimated 5 s
Ran all test suites matching /email-token.service/i.
Done in 5.51s.
The tests have passed; we have done our green part. If we run yarn test --coverage, which runs jest --coverage, we'll get:
$ jest --coverage
...
------------------------------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
------------------------------|---------|----------|---------|---------|-------------------
All files | ... | ... | ... | ... |
...
src/email-token | 100 | 100 | 100 | 100 |
email-token.service.ts | 100 | 100 | 100 | 100 |
------------------------------|---------|----------|---------|---------|-------------------
...
Our code is fully covered by the tests. I think this is a benefit of TDD: we don't strain to write tests just to reach 100% code coverage; instead, 100% coverage is (most likely) guaranteed by construction. And I think that's pretty cool.
Now we can refactor our tests, to make them cleaner, or refactor our feature code, to make it cleaner (this is the refactor part). In this case, we can tighten the types by changing the any type in our test file:
...
let db: DbType;
...
This makes the variable type-safe and consistent with the real database type. DbType is PostgresJsDatabase<typeof schema>.
Discipline and Critical Thinking
After working on a feature using TDD, I can feel the discipline required to really follow through on the idea. Reds always need to come before greens. Thinking critically about the flow of our methods beforehand is certainly a challenge. At first, I had a tough time getting used to this workflow (even now I'm still progressing). But as I got hands-on with the approach, I came to appreciate the benefits it offers. TDD can make our programs more robust, and we are "forced" to think through all the scenarios our methods will go through before writing the implementation. That's pretty cool.
Latest on TDD
Did you know that we can do test-driven development for IoT? The flow, as this source suggests, would be as follows:
- Write test cases for the IoT hardware
- Implement the code to make those test cases pass
- Refactor the code if needed
The flow is generally similar to a normal TDD flow. Here, however, we can mock the hardware components' behaviors so we can focus on the main flow of the method being tested. I think that this is pretty cool!
Let's say we have this sensor:
class ObjectSensor:
    def calculate_distance(self):
        ...  # would read from the real hardware

class MockObjectSensor:
    def calculate_distance(self):
        return 0.2  # simulated reading

# in tests, the mock stands in for the real sensor
assert MockObjectSensor().calculate_distance() == 0.2
We can then use this mock class to simulate the real object sensor in our IoT TDD flow.
This flow is really useful in the IoT context, so that we can create more solid IoT products.