A command object is a small object that represents a state-changing action that
can happen in the system. Commands have no behaviour, they’re pure data
structures. There’s no reason why you have to represent them with classes, since
Introducing Command Handler
Command handlers are stateless objects that orchestrate the behaviour of a
system. They are a kind of glue code, and manage the boring work of fetching and
saving objects, and then notifying other parts of the system. In keeping with
issue.mark_as_resolved(cmd.resolution)
This handler violates our glue-code principle because it encodes a business
rule: “If an issue is already resolved, then it can’t be resolved a second
time”. This rule belongs in our domain model, probably in the mark_as_resolved
If magic methods make you feel queasy, you can define a handler to be a class
that exposes a handle method like this:
class ReportIssueHandler:
    def handle(self, cmd):
        ...
However you structure them, the important ideas of commands and handlers are:
Commands are logic-free data structures with a name and a bunch of values.
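For example, a command can be as simple as a NamedTuple - the field names below are illustrative, not prescribed by the post:

```python
from typing import NamedTuple

# A command is nothing but named data; it carries no behaviour.
class ReportIssueCommand(NamedTuple):
    reporter_name: str
    reporter_email: str
    problem_description: str

cmd = ReportIssueCommand("fred", "fred@example.org", "Halp!")
```

Because it is a plain value object, two commands built from the same fields compare equal, which makes handlers easy to test.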
There’s not a lot of functionality here, and our issue log has a couple of problems: first, there’s no way to see the issues in the log yet, and second, we’ll lose all of our data every time we restart the process. We’ll fix the
Repository and Unit of Work Pattern
The IssueLog is a term from our conversation with the domain expert. It’s the
place that they record the list of all issues. This is part of the jargon used
by our customers, and so it clearly belongs in the domain, but it’s also the
Because we had the great foresight to use standardised ports, we can plug any
number of different devices into our circuit. For example, we could attach a
light-detector to the input and a buzzer to the output, or we could attach a
Considered in isolation, this is just an example of good OO practice: we are
extending our system through composition. What makes this a ports-and-adapters
architecture is the idea that there is an internal world consisting of the
By analogy to our circuit example, the IssueLog is a WriteablePort - it’s a way
for us to get data out of the system. SqlAlchemy and the file system are two
types of adapter that we can plug in, just like the Buzzer or Light classes. In
We expose a few methods, one to add new items, one to get items by their id, and
a third to find items by some criterion. This FooRepository is using a
SqlAlchemy session
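For comparison, an in-memory repository exposing the same three methods - a sketch with assumed names, not code from the post - needs only a plain list:

```python
from collections import namedtuple

Foo = namedtuple("Foo", ["id", "latitude"])

class InMemoryFooRepository:
    """A fake adapter: the same interface, backed by a plain list."""

    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    def get(self, id):
        return next(item for item in self.items if item.id == id)

    def find_by_latitude(self, latitude):
        return [item for item in self.items if item.latitude == latitude]

repo = InMemoryFooRepository()
repo.add(Foo(id=1, latitude=51.5))
```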
This adapter works just the same as the one backed by a real database, but does
so without any external state. This allows us to test our code without resorting
to Setup/Teardown scripts on our database, or monkey patching our ORM to return
request.
What does a unit of work look like?
class SqlAlchemyUnitOfWorkManager(UnitOfWorkManager):
    """The Unit of work manager returns a new unit of work.
       Our UOW is backed by a sql alchemy session whose
       lifetime can be scoped to a web request, or a
       long-lived background job."""

class SqlAlchemyUnitOfWork(UnitOfWork):
    """The unit of work captures the idea of a set of things that
       need to happen together. Usually, in a relational database,
This code is taken from a current production system - the code to implement
these patterns really isn’t complex. The only thing missing here is some logging
and error handling in the commit method. Our unit-of-work manager creates a new
Our command handler looks more or less the same, except that it’s now
responsible for starting a unit-of-work, and committing the unit-of-work when it
has finished. This is in keeping with our rule #1 - we will clearly define the
Next time [https://io.made.com/blog/commands-and-queries-handlers-and-views]
we’ll look at how to get data back out of the system.
In this class, the is_on method is referentially transparent - I can replace it
with the value True or False without any loss of functionality, but the method
toggle_light is side-effectual: replacing its calls with a static value would
What is CQS?
This is totally fine unless you have complex formatting, or multiple entrypoints
to your system. The problem with using your repositories directly in this way is
that it’s a slippery slope. Sooner or later you’re going to have a tight
Super convenient, but then you need to add some error handling and some logging
and an email notification.
Aaaaand, we’re back to where we started: business logic mixed with glue code,
and the whole mess slowly congealing in our web controllers. Of course, the
slippery slope argument isn’t a good reason not to do something, so if your
This is my favourite part of teaching ports and adapters to junior programmers,
because the conversation inevitably goes like this:
Why have a separate read-model?
ORMs make it very easy to “dot” through the object model this way, and pretend
that we have our data in memory, but this quickly leads to performance issues
when the ORM generates hundreds of select statements in response. Then they get
There are a few ways to do this. The most common is just to use a UUID, but you can also implement something like
hi-lo.
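The UUID route is a one-liner with the standard library: the id is generated on the client side, before the object ever reaches the database.

```python
import uuid

# No database round-trip needed to allocate an identifier.
issue_id = uuid.uuid4()
```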
We’re introducing a new concept - Issues now have a state, and a newly reported
issue begins in the AwaitingTriage state. We can quickly add a command and
handler that allows us to triage an issue.
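A sketch of that command and handler pair, following the shape of the earlier handlers in the series - the field names and method names here are my assumptions:

```python
from typing import NamedTuple

class TriageIssue(NamedTuple):
    issue_id: int
    category: str
    priority: str

class TriageIssueHandler:
    def __init__(self, uowm):
        self.uowm = uowm  # a unit-of-work manager, as earlier in the series

    def handle(self, cmd):
        # Fetch, mutate via the domain model, commit: boring glue code.
        with self.uowm.start() as uow:
            issue = uow.issues.get(cmd.issue_id)
            issue.triage(cmd.priority, cmd.category)
            uow.commit()
```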
Mapping our requirements to our domain
Triaging an issue, for now, is a matter of selecting a category and priority.
We’ll use a free string for category, and an enumeration for Priority. Once an
issue is triaged, it enters the AwaitingAssignment state. At some point we’ll
At this point, the handlers are becoming a little boring. As I said way back in
the first part [https://io.made.com/blog/introducing-command-handler/], command
handlers are supposed to be boring glue-code, and every command handler has the
Something here feels wrong, right? Our command-handler now has two very distinct
responsibilities. Back at the beginning of this series we said we would stick
with three principles:
We don’t really need a unit of work here, because we’re not making any
persistent changes to the Issue state, so what if we use a view builder instead?
That seems better, but how should we invoke our new handler? Building a new
command and handler from inside our AssignIssueHandler also sounds like a
violation of SRP. Worse still, if we start calling handlers from handlers, we’ll
class MessageBus:

    def __init__(self):
        """Our message bus is just a mapping from message type
           to a list of handlers"""
        self.subscribers = defaultdict(list)

    def handle(self, msg):
        """The handle method invokes each handler in turn
           with our event"""
        msg_name = type(msg).__name__
        subscribers = self.subscribers[msg_name]
        for subscriber in subscribers:
            subscriber.handle(msg)

    def subscribe_to(self, msg, handler):
        """Subscribe sets up a new mapping, we make sure not
           to allow more than one handler for a command"""
        subscribers = self.subscribers[msg.__name__]
        if msg.is_cmd and len(subscribers) > 0:
            ...
Here we have a bare-bones implementation of a message bus. It doesn’t do
anything fancy, but it will do the job for now. In a production system, the
message bus is an excellent place to put cross-cutting concerns; for example, we
Not much has changed here - we’re still building our command in the Flask
adapter, but now we’re passing it into a bus instead of directly constructing a
handler for ourselves. What about when we need to raise an event? We’ve got
I usually think of this event-raising as a kind of glue - it’s orchestration
code. Raising events from your handlers this way makes the flow of messages
explicit - you don’t have to look anywhere else in the system to understand
There’s a couple of benefits of doing this: firstly, it keeps our command
handler simpler, but secondly it pushes the logic for deciding when to send an
event into the model. For example, maybe we don’t always need to raise the
Now we’ll only raise our event if the issue was assigned by another engineer.
Cases like this are more like business logic than glue code, so today I’m
choosing to put them in my domain model. Updating our unit tests is trivial,
The have_raised function is a custom matcher I wrote that checks the events
attribute of our object to see if we raised the correct event. It’s easy to test
for the presence of events, because they’re namedtuples, and have value
Okay, we’ve covered a lot of ground here. We’ve discussed why you might want to
use domain events, how a message bus actually works in practice, and how we can
get events out of our domain and into our subscribers. The newest code sample
Removing cycles by inverting control
There are a few ways to tackle a circular dependency. You may be able to extract a shared dependency into a separate
module, that the other two modules depend on. You may be able to create an extra module that coordinates the two modules,
instead of them calling each other. Or you can use inversion of control.
At the moment, each module calls each other. We can pick one of the calls (let’s say A’s call to B) and invert
control so that A no longer needs to know anything about B. Instead, it exposes a way of plugging into its
behaviour, that B can then exploit. This can be diagrammed like so:
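As a code sketch (the module contents are my invention, and both "files" are collapsed into one listing): A exposes a list of handlers, and B appends to it, so the dependency now points only from B to A.

```python
# In A's module: A knows nothing about B, it only exposes a hook.
a_handlers = []

def do_work():
    # A calls whoever plugged in, without knowing who they are.
    return [handler("work finished") for handler in a_handlers]

# In B's module: B plugs into A's hook.
def on_work_finished(event):
    return f"B observed: {event}"

a_handlers.append(on_work_finished)
```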
Conclusion: complex is better than complicated
Simple is better than complex.
But also that
Complex is better than complicated.
I think of inversion of control as an example of choosing the complex over the complicated. If we don’t use it when
it’s needed, our efforts to create a simple system will tangle into complications. Inverting dependencies allows us,
at the cost of a small amount of complexity, to make our systems less complicated.
Abstractions, implementations and interfaces — in Python
print("Woof.")
-
In this example, Animal is an abstraction: it declares its speak method, but it’s not intended to be run (as
is signalled by the NotImplementedError).
Cat and Dog, however, are implementations: they both implement the speak method, each in their own way.
Polymorphism and duck typing
The make_animal_speak function need not know anything about cats or dogs; all it has to know is how to interact
with the abstract concept of an animal. Interacting with objects without knowing
their specific type, only their interface, is known as ‘polymorphism’.
Even if Cat and Dog don’t inherit Animal, they can still be passed to make_animal_speak and things
will work just fine. This informal ability to interact with an object without it explicitly declaring an interface
is known as ‘duck typing’.
We may even use Python modules:
import email
import text_message

for notification_method in [email, text_message]:
    notification_method.notify(customer, event)
Whether a shared interface is manifested in a formal, object oriented manner, or more implicitly, we can
generalise the separation between the interface and the implementation like so:
Technique One: Dependency Injection

# hello_world.py
def hello_world():
    print("Hello, world.")
This function is called from a top level function like so:
# main.py
import hello_world

hello_world.hello_world()
hello_world has one dependency that is of interest to us: the built in function print. We can draw a diagram
of these dependencies like this:
# hello_world.py
def hello_world(output_function):
    output_function("Hello, world.")
All we do is allow it to receive the output function as an argument. The orchestration code then passes in the print function via the argument:
# main.py
import hello_world

hello_world.hello_world(output_function=print)
That’s it. It couldn’t get much simpler, could it? In this example, we’re injecting a callable, but other
implementations could expect a class, an instance or even a module.
With very little code, we have moved the dependency out of hello_world, into the top level function:
The Configuration Registry

    output_function("Hello, world.")
To complete the picture, here’s how it could be configured externally:
# main.py
hello_world.hello_world()
The machinery in this case is simply a dictionary that is written to from outside the module. In a real world system,
we might want a slightly more sophisticated config system (making it immutable for example, is a good idea). But at heart,
any key-value store will do.
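Putting the pieces together, a minimal version might look like this - the "OUTPUT_FUNCTION" key is my invention, and the two files are collapsed into one listing:

```python
# hello_world.py
config = {}

def hello_world():
    # Look the output function up at call time, not at import time.
    output_function = config["OUTPUT_FUNCTION"]
    output_function("Hello, world.")

# main.py
config["OUTPUT_FUNCTION"] = print
hello_world()
```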
The Subscriber Registry

# hello_people.py
people = []

def hello_people():
    for person in people:
        print(f"Hello, {person}.")

# john.py
import hello_people

hello_people.people.append("John")

# martha.py
import hello_people

hello_people.people.append("Martha")
As with the configuration registry, there is a store that can be written to from outside. But instead of
being a dictionary, it’s a list. This list is populated, typically
at startup, by other components scattered throughout the system. When the time is right,
Subscribing to events

    for subscriber in subscribers:
        subscriber()

# log.py
import hello_world

hello_world.subscribers.append(write_to_log)
Technique Three: Monkey Patching
Our final technique, Monkey Patching, is very different to the others, as it doesn’t use the Inversion of Control
pattern described above.
Monkey patching takes other forms. You could manipulate to your heart’s content some hapless class defined elsewhere
— changing attributes, swapping in other methods, and generally doing whatever you like to it.
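For instance, here is the idea in miniature, swapping a method on a hapless class from the outside (the class is invented for illustration):

```python
class Greeter:
    def greet(self):
        return "Hello, world."

# Monkey patching: reach in from outside and swap the method wholesale.
def rude_greet(self):
    return "Go away."

Greeter.greet = rude_greet
```

Every Greeter, including instances created before the patch, now answers rudely - which is exactly why this technique deserves caution.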
We want to sync our shipments model with a third party, the cargo freight
company, via their API. We have a couple of use cases: creating new shipments,
and checking for updated etas.
Writing tests for external API calls
How do we sync to the API? A simple POST request, with a bit of datatype
conversion and wrangling.
def sync_to_api(shipment):
    ...
Not too bad!
How do we test it? In a case like this, the typical reaction is to reach for mocks,
and as long as things stay simple, it’s pretty manageable
And you can imagine adding a few more tests, perhaps one that checks that we do
the date-to-isoformat conversion correctly, maybe one that checks we can handle
multiple lines. Three tests, one mock each, we’re ok.
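To make the pattern concrete, here is a self-contained sketch of one such test. It uses urllib from the standard library in place of requests so it can run standalone, and the function body and field names are my assumptions, not the post’s:

```python
import datetime
import json
import urllib.request
from unittest import mock

API_URL = "https://example.com/api"  # placeholder endpoint

def sync_to_api(shipment):
    # POST the shipment as JSON, converting the date on the way out.
    data = json.dumps({
        "ref": shipment["ref"],
        "eta": shipment["eta"].isoformat(),
    }).encode()
    req = urllib.request.Request(
        API_URL + "/shipments/", data=data,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def test_posts_shipment_with_isoformat_eta():
    shipment = {"ref": "S-1", "eta": datetime.date(2021, 1, 2)}
    with mock.patch("urllib.request.urlopen") as fake_urlopen:
        sync_to_api(shipment)
    (request,), _ = fake_urlopen.call_args
    assert json.loads(request.data) == {"ref": "S-1", "eta": "2021-01-02"}
```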
And as usual, complexity creeps in:
…and our tests are getting less and less pleasant. Again, the details don’t
matter too much, the hope is that this sort of test ugliness is familiar.
And this is only the beginning, we’ve shown an API integration that only cares
I haven’t coded up what all the tests would look like, but you could imagine them:
a test that if the shipment does not exist, we log a warning. Needs to mock requests.get or get_shipment_id()
SUGGESTION: Build an Adapter (a wrapper for the external API)
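A sketch of what such an adapter might look like - the class and method names are mine, not the post’s, and the session is injected so a fake can stand in during tests:

```python
# A thin wrapper that hides the third-party HTTP API behind methods we own.
class CargoFreightAdapter:
    def __init__(self, base_url, session):
        self.base_url = base_url
        self.session = session  # e.g. a requests.Session in production

    def create_shipment(self, ref, lines):
        self.session.post(
            f"{self.base_url}/shipments/",
            json={"client_reference": ref, "lines": lines},
        )

    def get_eta(self, ref):
        response = self.session.get(f"{self.base_url}/shipments/{ref}/")
        return response.json()["eta"]
```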
SUGGESTION: Use (only?) integration tests to test your Adapter
Now we can test our adapter separately from our main application code, we
can have a think about what the best way to test it is. Since it’s just
That relies on your third-party api having a decent sandbox that you can test against.
You’ll need to think about:
OPTION: Build your own fake for integration tests
return 'ok', 200
This doesn’t mean you never test against the third-party API, but
you’ve now given yourself the option not to.
OPTION: DI
Now we can add our explicit dependency where it’s needed, replacing
a hardcoded import with a new, explicit argument to a function somewhere.
Possibly even with a type hint:
# rest of controller code essentially unchanged.
What effect does that have on our tests? Well, instead of needing to
call with mock.patch(), we can create a standalone mock, and pass it
in:
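A sketch of the difference - the controller below is invented, but shows the shape: the dependency arrives as an argument, so the test builds a Mock and hands it over, no patching required:

```python
from unittest import mock

def create_shipment_controller(payload, api_client):
    # The api client is an explicit argument: no hardcoded import to patch.
    api_client.create_shipment(payload)
    return "ok"

fake_client = mock.Mock()
result = create_shipment_controller({"ref": "S-1"}, api_client=fake_client)
fake_client.create_shipment.assert_called_once_with({"ref": "S-1"})
```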
Each time you add a layer, you buy yourself some decoupling, but it
comes at the cost of an extra moving part. In the simplest terms, there’s an
extra file you have to maintain.
Here’s a recap of all the layers + parts of our architecture
So. Once upon a time, early in my time at MADE, I remember having to make a
simple change to an app that the buying team uses. We needed to record an extra
piece of information for each shipment, an optional “delay” field to be used in
Making Enums (as always, arguably) more Pythonic
    GALAXY = 'galaxy'
What could be wrong with that, I hear you ask?
Well, accuse me of wanting to stringly type everything if you will,
but: those enums may look like strings but they aren’t!
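To see the problem concretely, using the BRAIN enum from the post:

```python
from enum import Enum

class BRAIN(Enum):
    SMALL = 'small'
    MEDIUM = 'medium'
    GALAXY = 'galaxy'

# The members hold strings, but they are not strings:
assert BRAIN.SMALL != 'small'
assert BRAIN.SMALL.value == 'small'  # the string hides in .value
```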
I imagine some people think this is a feature rather than a bug? But for me
it’s an endless source of annoyance. They look like strings! I defined them
as strings! Why don’t they behave like strings, argh!
assert random.choice(BRAIN) in ['small', 'medium', 'galaxy']  # Raises an Exception!!!
KeyError: 2
I have no idea what’s going on there. What we actually wanted was
assert random.choice(list(BRAIN)) in ['small', 'medium', 'galaxy']  # which is still not true, but at least it doesn't raise an exception
Now the standard library does provide a solution
if you want to duck-type your enums to integers,
IntEnum
assert random.choice(list(IBRAIN)) in [1, 2, 3]  # this is ok
That’s all fine and good, but I don’t want to use integers.
I want to use strings, because then when I look in my database,
or in printouts, or wherever,
# so, while BRAIN.SMALL == 'small', str(BRAIN.SMALL) != 'small' aaaargh
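The fix that comment alludes to is mixing str into the enum, which restores equality with plain strings:

```python
from enum import Enum

class BRAIN(str, Enum):
    SMALL = 'small'
    MEDIUM = 'medium'
    GALAXY = 'galaxy'

assert BRAIN.SMALL == 'small'  # mixing in str restores string equality
# But depending on your Python version, str(BRAIN.SMALL) may still be
# 'BRAIN.SMALL' rather than 'small' - hence the aaaargh.
```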