At university, I recently followed an OOP programming course. A big part of the course was using design patterns to architect features and applications. As a functional programmer, I was (and still am) horrified/puzzled by how OOP languages encourage the use of seemingly arbitrary design patterns.

But it made me think: most Elm programmers are JavaScript programmers (welcome!) to whom a functional approach to development is foreign. The Elm Architecture solves many global architecture problems, but for business logic, there isn't much learning material available at the moment. As a community, we should do a better job in this area. This post describes how the concept of mathematical structures can help design programs.

## The structure of structures

Rather than overarching design patterns, functional programs are based on composing small pieces: Bottom-up rather than top-down. So, we want patterns that lead to building blocks that compose nicely. The mathematical structure is one such pattern.

The structure captures something very fundamental: it is **an association between data and functions**, the two atomic building blocks of (functional) programming. This idea matters much more than the details of the structures discussed here: functional programs are structured in terms of relations between data and functions. It is a simple idea, but non-functional languages often make it complicated (objects, side effects, design patterns, etc.).

Concretely, a structure viewed through a programming lens is a combination of:

- A **Type**, possibly with **special values** of this type
- one or more **Functions**
- **Laws or properties** of the behavior of the functions

Or more succinctly: a tuple `(Type, Function)` - possibly extended with more functions or values - with laws governing how the functions behave.

### An example

The type **Int** with the operation `(+)`. We can observe that the order in which a sum is bracketed does not matter: addition is associative.

`a + (b + c) == (a + b) + c `

Additionally, adding 0 on either side doesn’t change the result: 0 is the identity under addition.

`x + 0 == x == 0 + x `

In Elm code:

```elm
type alias MyPattern =
    { plus : Int -> Int -> Int
    , identity : Int
    }

myInstance : MyPattern
myInstance =
    { plus = (+), identity = 0 }
```

Sadly, it’s not possible to express the laws in a way the compiler understands, or to let the compiler verify these laws. Therefore in programming, **the laws are assumptions**: we assume that the instance creator created a correct, law-abiding instance.

The formalization of a structure as a record is also somewhat flawed in Elm, currently. Lack of higher-kinded types and typeclasses (more advanced FP features) means that you gain very little by declaring a type alias. Some structures are impossible to express in current Elm. Still, it helps to think of instances as records of values and functions, so I’ll use them in this post.

## Generalizing types

When a pattern has just one instance, it isn't much of a pattern. We have to let it work on more `(Type, Function)` pairs. A simple generalization is to adjust the type from `Int` to `number`:

```elm
type alias MyPattern number =
    { plus : number -> number -> number
    , identity : number
    }

myInstance : MyPattern number
myInstance =
    { plus = (+), identity = 0 }
```

Continuing that pattern, `a` can be substituted for `number`. Then, to create an instance, one has to find a type with an associative operation that has an identity element. This pattern is very common and has a name: the **monoid**.

Here are some instances:

```elm
type alias Monoid a =
    { append : a -> a -> a, identity : a }

and : Monoid Bool
and =
    { append = (&&), identity = True }

or : Monoid Bool
or =
    { append = (||), identity = False }

list : Monoid (List a)
list =
    { append = (++), identity = [] }
```

The monoid is a useful pattern because it has a decent number of instances, and common functions can be defined for monoids in general, for instance `mconcat`:

```elm
-- mconcat for "monoidal concatenation"
mconcat : Monoid a -> List a -> a
mconcat monoid list =
    List.foldr monoid.append monoid.identity list

plus : Monoid number
plus =
    { append = (+), identity = 0 }

sum : List number -> number
sum =
    mconcat plus

concat : List (List a) -> List a
concat =
    mconcat { append = (++), identity = [] }
```
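Using these monoid-derived functions then looks like using any ordinary list function (`total` and `flattened` are hypothetical names for this illustration):

```elm
total : Int
total =
    sum [ 1, 2, 3 ]

-- total == 6

flattened : List Int
flattened =
    concat [ [ 1, 2 ], [ 3 ] ]

-- flattened == [ 1, 2, 3 ]
```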

## Varying assumptions

By slightly modifying the assumptions, we get structures with different behavior.

### Removing assumptions

By removing assumptions, more `(Type, Function)` combinations will satisfy the remaining assumptions. This means the weaker pattern is more general.

When the identity element is removed from a monoid, a structure called semigroup remains:

```elm
type alias Semigroup a =
    { append : a -> a -> a }

-- Basics.min : comparable -> comparable -> comparable
minimal : Semigroup comparable
minimal =
    { append = Basics.min }
```

Because monoids have an append operation, every monoid is automatically also a semigroup. An example of a semigroup that cannot be extended to a monoid is the minimum: there is no value that `Basics.min` leaves unchanged for every `comparable`, so no valid identity exists.

If you have custom types in your projects, try to see whether you can form instances yourself.

For instance, I found that bounding boxes, for which I recently made a package, form a semigroup:

```elm
type alias Vec2 =
    { x : Float, y : Float }

type BoundingBox
    = BoundingBox Vec2 Vec2

union : BoundingBox -> BoundingBox -> BoundingBox
union (BoundingBox lower1 upper1) (BoundingBox lower2 upper2) =
    BoundingBox
        { x = min lower1.x lower2.x, y = min lower1.y lower2.y }
        { x = max upper1.x upper2.x, y = max upper1.y upper2.y }

semigroupBBox : Semigroup BoundingBox
semigroupBBox =
    { append = union }
```
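The semigroup gives some free code too. Without an identity, there is no safe seed for folding an empty list, so a general fold must take a starting element. A minimal sketch (the name `sconcat` is my own, borrowed from the Haskell convention):

```elm
-- fold a "non-empty list", represented here as a first
-- element plus a list of the rest; no identity element needed
sconcat : Semigroup a -> a -> List a -> a
sconcat semigroup first rest =
    List.foldl semigroup.append first rest
```

With this, `sconcat semigroupBBox box otherBoxes` computes the bounding box around a whole non-empty collection of boxes.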

### Adding assumptions

The append operation of a monoid is associative. It can be strengthened by adding more laws, for instance commutativity:

`append x y == append y x`

Adding the commutativity assumption gives a commutative monoid. Because of their properties, commutative monoids are very useful in parallel computing.

For example, consider summing the list `[ 1, 2, 3, 4, 5, 6 ]` on three cores. Because of associativity:

`sum [ 1, 2, 3, 4, 5, 6 ] == sum [ 1, 2 ] + sum [ 3, 4 ] + sum [ 5, 6 ]`

Now each core can compute an individual sum, and then the per-core results are added. But the cores might not finish in order. Luckily, because of commutativity, the order in which the final answer is assembled doesn't matter. The commutative monoid has assumptions just strong enough to make distributing work over cores painless.
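Elm itself is single-threaded, but the chunk-and-recombine idea can still be sketched with the `Monoid` alias and `mconcat` from above. The `chunksOf` helper and the `List.reverse` standing in for out-of-order completion are my own illustration:

```elm
-- hypothetical helper: split a list into chunks of size n
chunksOf : Int -> List a -> List (List a)
chunksOf n xs =
    if List.isEmpty xs then
        []

    else
        List.take n xs :: chunksOf n (List.drop n xs)

chunkedSum : List Int -> Int
chunkedSum xs =
    chunksOf 2 xs
        |> List.map (mconcat plus)
        -- simulate the "cores" finishing in a different order:
        |> List.reverse
        -- commutativity guarantees the same final answer:
        |> mconcat plus
```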

## The applicability versus usefulness optimization problem

Structures are useful when they have a decent number of interesting instances. A decent number is needed to make learning about a structure worthwhile, and by "interesting" I mean "solves real problems".

Having weak assumptions gives many instances, but may not allow meaningful generalization (free code, like `mconcat`). Conversely, strong laws give a lot of opportunity for generalization, but there may be no interesting instances to apply the generalizations to. So, we want to find a sweet spot.

- The **magma** (any type `a` with a function `a -> a -> a`) is too general to be useful
- The **semigroup** has many instances, a couple of which are interesting
- The **monoid** is probably the sweet spot
- The **commutative monoid** is more useful, but has fewer instances
- The **group** (even stronger assumptions, see postscript) has very few instances and virtually no use in programming (currently)

## Conclusion

When designing modules — specifically constructing types for business logic, like the bounding box — it can be very helpful to think about what properties and structures the type has or should have. Often, more complex features follow logically from the building blocks that structures provide.

Structures give free code. The monoid gives `mconcat` for free. More advanced structures, like the (in)famous monad, allow for many more free functions. Structures are also abstractions: when a type is a monoid, it exposes an API (`identity` and `append`). The internal implementation often doesn't really matter.

Thinking in terms of structures made me view my code differently, looking for interesting relationships at all granularities. Next time you design a piece of code, try to see whether you can apply this idea.

## Postscript: Adding or modifying functions

We can choose to add a function to a structure, or start out with a completely new one.

### Inverses give groups

By adding an inverse function such that

```elm
append x (invert x) == identity
append (invert x) x == identity

-- example: for addition, negate is a valid inverse
5 + negate 5 == 0 -- holds
-- for (&&), no inverse exists: nothing
-- appended to False can give back True
```

a structure called **group** is created. These are not very useful in programming (although uses may be found someday), but they are in mathematics. See also Group Theory.
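As a record, the group simply extends the monoid with the inverse function. A sketch, using integer addition as the instance (`intAddition` is a hypothetical name for this illustration):

```elm
type alias Group a =
    { append : a -> a -> a
    , identity : a
    , invert : a -> a
    }

intAddition : Group Int
intAddition =
    { append = (+), identity = 0, invert = negate }
```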

### Different base functions

By choosing a function different from `append : a -> a -> a` we get completely different structures. An interesting choice is `map : (a -> b) -> f a -> f b`, which for lists gives `(List, (a -> b) -> List a -> List b)`. To create the instance, we use `List.map`, which has the correct type signature. The same structure can be defined for `Maybe` with `Maybe.map` or `Decoder` with `Json.Decode.map`. These names are no coincidence: all these types and their map functions are instances of a structure called **functor**. There is one accompanying law:

```elm
map identity = identity

-- checking the law for Maybe, using the definition of Maybe.map
Maybe.map identity (Just 5) = Just (identity 5) = Just 5 = identity (Just 5)
Maybe.map identity Nothing = Nothing = identity Nothing

map : (a -> b) -> Maybe a -> Maybe b
map f m =
    case m of
        Just v ->
            Just (f v)

        Nothing ->
            Nothing
```