The article this blog post is written about can be found here.

I decided this week to research the Law of Demeter, also known as the principle of least knowledge. I chose to write about this article specifically because it provides a source code example in Java, it gives examples of the sort of code the Law of Demeter is trying to prevent, and it explains why writing code like that is a bad idea.

The Law of Demeter, as applied to object-oriented programming, is a design principle consisting of a set of rules about which methods an object should be able to call. The law states that an object may call its own methods, the methods of arguments passed to it as parameters, the methods of objects it creates locally, the methods of objects that are its instance variables, and the methods of objects that are global variables. The general idea is that objects should know as little as possible about the structure or properties of anything besides themselves. Put more abstractly, objects should only talk to their immediate friends, not to strangers. Adhering to the Law of Demeter produces loosely coupled classes, and it also supports the principle of information hiding.

The sort of code that the Law of Demeter exists to prevent is a chain of method calls that looks like this:
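In Java, using the ObjectA, ObjectB, and ObjectC names the article discusses, the offending chain might look something like this sketch:

```java
// A hypothetical chain that violates the Law of Demeter:
// the caller reaches through ObjectA and ObjectB to get at ObjectC.
class ObjectC {
    String doSomething() { return "did something"; }
}

class ObjectB {
    private final ObjectC objectC = new ObjectC();
    ObjectC getObjectC() { return objectC; }
}

class ObjectA {
    private final ObjectB objectB = new ObjectB();
    ObjectB getObjectB() { return objectB; }
}

class ChainDemo {
    public static void main(String[] args) {
        ObjectA objectA = new ObjectA();
        // The smell: this line depends on the internal structure of
        // ObjectA *and* ObjectB, not just on objectA itself.
        String result = objectA.getObjectB().getObjectC().doSomething();
        System.out.println(result);
    }
}
```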


The article explains that this is a bad idea for several reasons. ObjectA might lose its reference to ObjectB during refactoring, just as ObjectB might lose its reference to ObjectC. The doSomething() methods in ObjectB or ObjectC might change or be removed. And since classes written this way are tightly coupled, it becomes much harder to reuse any individual class. Following the law means your classes will be less affected by changes in other classes, they'll be easier to test, and they'll tend to have fewer errors.

If you want to improve the bad code so it adheres to the Law of Demeter, you can pass ObjectC to the class containing the original code so that it can call doSomething() directly. Alternatively, you can create wrapper methods in your other classes that pass requests on to a delegate. Lots of delegate methods will make your code larger and slower, but they will also make it easier to maintain.
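Sketched in Java with the hypothetical ObjectA, ObjectB, and ObjectC names, the wrapper approach looks like this: each class forwards the request to its immediate neighbor, so callers never reach past ObjectA.

```java
class ObjectC {
    String doSomething() { return "did something"; }
}

class ObjectB {
    private final ObjectC objectC = new ObjectC();
    // Wrapper method that forwards the request to its delegate, ObjectC.
    String doSomething() { return objectC.doSomething(); }
}

class ObjectA {
    private final ObjectB objectB = new ObjectB();
    // Callers talk only to ObjectA; they never learn that
    // ObjectB or ObjectC exist.
    String doSomething() { return objectB.doSomething(); }
}
```

A caller now writes `new ObjectA().doSomething()` instead of the chained call, so a refactoring inside ObjectB or ObjectC only affects its immediate neighbor.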

Before reading this article, seeing a chain of calls like that probably would have made me instinctively recoil, but I wouldn’t have been able to explain exactly what was wrong. The bad examples given in this article clarified the reasons why code like that is weak. The article also gave me concrete ways I can follow the Law of Demeter in future code.


The blog post this is written about can be found here.

I picked this blog post because we've been utilizing mock objects in class lately, and this post explains in depth the logic behind using them, in addition to succinctly summarizing the different types of mock objects.

Using mock objects focuses a test on the specific code we want to test, eliminating its dependencies on other pieces of code we don’t care about at the moment. This way, if a test fails, we can be sure it’s because of a problem in the code under test and not in something called by it. This greatly simplifies searching for faults and reduces time spent looking for them.

Mock objects also serve to keep the test results consistent, especially when the real object you’re creating a mock of can undergo unpredictable changes. If you utilize a changing database, for instance, your test might pass one time and then fail the next, which gives you no useful information.

Mock objects can also reduce the time necessary to run tests. If code would normally call outside resources, running hundreds of tests which utilize the actual code could take a long while. Mocks of these resources would respond much more quickly. Obviously we want to test calls to the actual resources at some point, but they aren’t necessary in every instance.

“Mock” is also used as a generic term for any kind of imitation object used to replace a real object during testing, and there are several kinds. Fakes return a predictable result, but the result isn’t based on the logic the real object uses to obtain it. Stubs return a specific result in response to specific input, but they aren’t equipped to handle other inputs. Stubs can also retain information about how they were called, such as how many times and with what data. Mocks are far more sophisticated versions of stubs: they return values in similar ways, but can also hold expectations about how many times each method should be called, in which order, and with what data. Mocks can ensure that the code we’re testing is using its dependencies in exactly the way we want it to. Spies replace the methods of the real object a test wants to call, rather than acting as a stand-in for the whole object. Dummies are objects that are passed in place of another object but never used.
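To make the stub idea concrete, here's a hand-rolled sketch in Java (the MailService interface and all the names here are hypothetical): the stub returns a canned result and also records how it was called, which a test can then inspect.

```java
// A hypothetical MailService dependency, replaced in tests by a
// hand-rolled stub that returns a canned result and records calls.
interface MailService {
    boolean send(String to, String body);
}

class MailServiceStub implements MailService {
    int sendCount = 0;          // retained information about calls
    String lastRecipient = null;

    @Override
    public boolean send(String to, String body) {
        sendCount++;
        lastRecipient = to;
        return true;            // canned result, no real mail logic
    }
}

// The code under test only knows about the MailService interface.
class WelcomeEmailer {
    private final MailService mail;
    WelcomeEmailer(MailService mail) { this.mail = mail; }

    boolean welcome(String user) {
        return mail.send(user, "Welcome aboard!");
    }
}
```

A test constructs a WelcomeEmailer around the stub, calls welcome(), and then asserts on sendCount and lastRecipient; a full mocking framework would let you declare those expectations up front instead.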

Creating the most sophisticated type of mock seems like it might take more time than it’s worth, but existing mocking frameworks can take care of most of the work of creating mock objects for your tests.

In the future I expect to write tests that utilize mocking. This post, along with Martin Fowler’s article, has given me a good starting point in being able to utilize them effectively as well as decide how elaborate a mock needs to be for a particular test.

The blog post this information is sourced from can be found here.

I chose this blog post because it briefly sums up the general responsibility assignment software patterns (GRASP), which I wanted to learn about. The GRASP patterns serve as guidelines for deciding which classes or objects should be assigned which responsibilities. There are nine patterns in total.

A general method for deciding which class to assign a responsibility to is to assign it to the information expert: the class that has the necessary information to carry out the responsibility in full.

When trying to decide which object should be responsible for creating new instances of a class, the Creator pattern says that the responsibility of creating Class A instances should be assigned to Class B if Class B contains instances of Class A or gathers instances of Class A into one place, if Class B keeps a record of Class A objects, if Class B is closely associated with Class A objects, or if Class B has all the information necessary to create a Class A object – that is, it’s an information expert about the responsibility of creating Class A objects.
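As a sketch of the Creator pattern (with hypothetical Order and LineItem classes): an order contains and keeps a record of its line items, so it takes on the responsibility of creating them.

```java
import java.util.ArrayList;
import java.util.List;

class LineItem {
    final String product;
    final int quantity;
    LineItem(String product, int quantity) {
        this.product = product;
        this.quantity = quantity;
    }
}

class Order {
    private final List<LineItem> items = new ArrayList<>();

    // Order contains and keeps a record of LineItem objects, so per
    // the Creator pattern it is the class that creates them.
    LineItem addItem(String product, int quantity) {
        LineItem item = new LineItem(product, quantity);
        items.add(item);
        return item;
    }

    int itemCount() { return items.size(); }
}
```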

The controller pattern answers the question of what should handle input system events. Controllers can represent the entire system, device, or subsystem, in which case they’re referred to as façade controllers. They can also be use case or session controllers, which handle all system events of a use case. The controller doesn’t itself do the work, but instead delegates it to the appropriate objects and controls the flow of activity.

The low coupling pattern holds that responsibilities should be assigned in such a way that coupling remains as low as possible – that is, there is low dependency between classes, high reuse potential, and changes in one class have a low impact on other classes.

High cohesion means the responsibilities in a class are highly related or focused on a single goal. The amount of work one class does should be limited, and classes shouldn’t do lots of unrelated things.

If behavior varies based on type, polymorphism holds that the responsibility of defining that variation should be assigned to the types for which the variation occurs.
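For instance (with hypothetical Shape classes), if computing an area varies by type of shape, each type defines its own variation:

```java
interface Shape {
    double area();
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    // The square-specific variation lives in the Square type.
    @Override public double area() { return side * side; }
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    // The circle-specific variation lives in the Circle type.
    @Override public double area() { return Math.PI * radius * radius; }
}
```

Callers simply ask any Shape for its area rather than switching on the concrete type.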

Pure fabrication classes exist to maintain high cohesion when assigning responsibilities based on the information expert pattern would not. They don’t represent something in the problem domain.

Indirection maintains low coupling by assigning mediation responsibilities between two classes to an intermediate class.

The protected variations pattern assigns responsibilities that create a stable interface around points of likely variation or instability to protect other parts of the system from being affected by any variation/instability.

This post gave me ideas I can refer back to when solving the problem of which of several classes should be assigned a particular responsibility. The problems addressed by these design patterns exist in just about every software development project, so there’s no doubt I will find it useful in the future.

The podcast this is written about can be found here.

In the podcast episode I listened to this week, the podcasters seek to define terms like quality, testing, test cases, and integration testing. The general definitions that immediately occur for these concepts can be somewhat abstract, so they further seek to figure out whether these questions can be answered more concretely or only philosophically.

They approach the concept of quality from several different angles. They agree that quality doesn’t simply mean bug-free, and instead define it as the extent to which the user of a thing views it as helpful or useful. Things that contribute to quality might be that it’s easy to operate, it feels good to use it, and the user knows what it can and can’t do and is satisfied with that. They argue that if one piece of software has a lot of known bugs but is used frequently by thousands of people, and another piece of software is bug-free but only used by ten people, the first one is actually higher quality. They use the example of a chair to illustrate their definition of quality. A chair without a seat would obviously be low quality, since you expect to be able to sit on a chair. A different chair with a seat but with one of the legs three inches too short would be low quality in a different way; in some contexts sitting on it would work out fine, but if you lean in the wrong direction you might fall onto the floor. We’d expect a higher-quality chair not to “crash” like that. They also agree that quality is subjective and based in part on what the individual desires out of something.

The podcasters arrived at a more concrete definition of a test case much more quickly. They defined it as something you want to try, a result you want to verify, and a judgment call made about that result. As an example, they use posting a message to Slack and verifying that it shows up. One of the podcasters argues that it would still count as a single test case if you verified it on several different OSes. They say that the judgment call is the most difficult part of testing, and it’s based on your own definition of quality.

They then try to define testing as a whole. They decide a large part of it is those judgment calls, but also generating new ideas for things to try and learning about the software.

Finally, they define integration testing broadly as making sure all the pieces fit together, and more specifically as the tests you write to make sure other people don’t break your code. The verification would be that someone else’s component returns the expected results that your component is counting on.

These podcasts continue to build a picture for me of the current state of the industry and help me think outside the box about testing terminology and testers’ role.

The resource I discovered this week is partially a blog post, but mostly the mobile app that the blog post is about. The app is called Enki. It’s similar to the language-learning app Duolingo, but its purpose is to help people learn new software development skills in small chunks every day. The blog post explains why the app was developed: the options that currently exist for learning software development skills on an ongoing basis (mostly books and video courses) take a lot of time, something developers tend to be short on. They can also be boring and inefficient. The app creators wanted something fun, engaging, useful, and quick. The mobile platform was selected so that users would always be able to have it with them, and therefore be able to squeeze learning into limited free time like a work commute or the time it takes for their code to compile. The daily lessons, called workouts, stick to small tips and bits of applicable information instead of getting bogged down in details that people who’ve passed the beginner stage probably already know. They’re also designed specifically with avoiding boredom in mind, so the workouts contain engaging challenges and game-like elements.

I chose this resource because it’s one of the only learning tools of its type I’ve seen, and it’s something I’m actually interested in using. Tools like Codecademy exist, but they’re aimed at getting new people interested rather than facilitating ongoing learning.

Reading about this app affected me immediately, because I downloaded and started using it. The app lets you select which topics you’d like to receive daily workouts about, and what level of skill you have in that area. It then prompts you to do at least one workout a day. I selected beginner lessons for web development and JavaScript, and a “familiar” level of skill for Java. The workouts are designed to take about five minutes, and if you leave in the middle of one and then come back, it will tell you how many remaining minutes are required to complete the lesson. Seeing such small amounts of time encourages you to finish. After you complete a workout, you unlock a game related to the topic. It’s like a combination of Tetris and rapid-fire trivia; you need to sort pieces of information into one of two categories, and answering wrong causes the pieces to start stacking up. You lose when the stack reaches the top of the screen.

So far I’ve only brushed up on HTML skills, but the app seems interesting, and I intend to keep using it. It seems like a fun way to learn new things or to get a refresher.

The blog post referenced in this post can be found here.

The blog post this is written about can be found here.

I’ve been hearing words like waterfall and agile a lot in the course of researching software development and testing for my classes, so this week I tracked down a simple blog post explaining the difference between the two development methods. The descriptions of the two lined up with the two sides pitted against each other in the time-travel argument I wrote about for my other class.

The earlier method, waterfall, is a sequential scheme in which development is split into eight stages, each following the previous one with no overlap. This is a technique I’d actually heard explained, unattached to the name waterfall, prior to this year. In other resources, it seems to be mostly referred to in terms of its disadvantages. This post lists some of the advantages of the method. Because there’s no room for error or modification (you can’t go back to a previous step without starting the whole process over again), extensive planning and documentation are required. As a consequence, the waterfall methodology can to some extent ensure a very clear picture of the final product, and the documentation serves as a resource for making improvements in the future.

However, there are significant downsides that led to the creation of the agile methodology. The dependence on initial requirements means that if the requirements are incomplete or in error, the resulting software will be too. If the problems with the requirements are discovered in the middle of development, the developers will have to start over. All testing is pushed to the end, which means that if bugs were created early, they could have had an impact on code written later. The whole thing is a recipe for the project taking a very long time.

In contrast, developers using the agile methodology start with a simple design and then begin working on small modules for set intervals of time called sprints. After every sprint, testing is done and priorities are reexamined. Bugs are discovered and fixed quicker in this way, and the method is highly adaptable to changing requirements. This approach tends to be much faster and is favored in modern development. It allows for adaptation to rapid changes in industry standards, the quick release of a working piece of software, and the ability for a client to give feedback and see immediate changes. The lack of a definitive plan at the beginning can be a drawback.

Having a clear picture of both of these methodologies provides useful context that will enable me to follow more in-depth discussions of software development, and there’s a good chance it will be relevant to my future career.

The article referenced in this blog post can be found here.

This past week I found an article which put forward an unconventional idea: unit testing smells. I picked this article because applying the concept of code smells to test code was intriguing to me. The idea is that certain things that can happen in the course of writing and running your test code can inform you that something is not quite right with your production code. They aren’t bugs or test failures, but, like all code smells, indicators of poor design which could lead to difficulties down the line.

Firstly, the author suggests that having a very difficult time writing tests could signify that you haven’t written testable code. He explains that most of the time, it’s an indicator of high coupling. This can be a problem with novice testers especially, as they’ll often assume the problem is with them rather than the code they’re attempting to write tests for.

If you can write tests well enough, but you find yourself doing elaborately difficult things to get at the code you’re trying to test, that’s another testing smell. The author writes that this is likely the result of writing an iceberg class, which was a new term for me. Essentially, too much is encapsulated in one class, which forces you into mechanisms like reflection schemes to reach the internal methods you’re trying to test. Instead, those methods should probably be public methods of a separate class.
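A sketch of that refactoring, with hypothetical class names: formatting logic that would otherwise be trapped inside a private method of a report class becomes a small public class that a unit test can reach directly.

```java
// Before (iceberg): ReportGenerator would hide formatting logic in a
// private method, so a test would need reflection to reach it.
// After: the logic lives in its own class with a public method.
class CurrencyFormatter {
    public String format(long cents) {
        return String.format("$%d.%02d", cents / 100, cents % 100);
    }
}

class ReportGenerator {
    private final CurrencyFormatter formatter = new CurrencyFormatter();

    // ReportGenerator now delegates to the extracted class.
    String totalLine(long totalCents) {
        return "Total: " + formatter.format(totalCents);
    }
}
```

A test can now exercise CurrencyFormatter on its own, without constructing a whole report or reflecting into private members.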

Tests that take a long time to run are another smell. It could mean that you’re doing something other than unit testing, like accessing a database or writing a file, or you could have found an inefficient part of the production code that needs to be optimized.

A particularly insidious test smell is intermittent test failure. This test passes over and over again, but every once in a while, it will fail when given the exact same input as always. This tells you nothing definitive about what’s going on, which is a real problem when you’re performing tests specifically to get a definitive answer about whether your code is working as intended. If you generate a random number somewhere in the production code, it could be that the test is failing for some specific number. It could be a problem with the test you wrote. It could be that you don’t actually understand the behavior of the production code. This kind of smell is a hassle to address, but it’s absolutely crucial to figure out.

Having read this, I won’t just pay attention to whether my tests yield the expected results, but will look out for these signs of design flaws in the code being tested.