GraphQL Clients

In my last post (1), we set up a GraphQL server with Hot Chocolate; in this post, I will show how we can call that server from various clients. First, we will make calls from a C# app with Strawberry Shake, a client provided by the Hot Chocolate team; then we will make calls from a React web app with the two popular GraphQL clients Apollo and Relay. When using these clients, it is important to remember that they are just making HTTPS (or whatever other transport you decide to use) calls behind the scenes; these clients are simply wrappers that provide extra tooling to make your life easier. The server we are building against can be found on GitHub (2).

Strawberry Shake

Writing a Strawberry Shake client against .NET 5+ is very easy; a completed demo can be found at (3). First, we need to install the Strawberry Shake dotnet tools by running dotnet tool install StrawberryShake.Tools --local on the command line. Now we will create our client with dotnet graphql init https://localhost:44377/graphql/ -n LibraryClient -p ./Client; then add a namespace property in the created .graphqlrc.json file under extensions:strawberryShake, alongside the url property. This is the namespace the generated client will be placed under. See (4) for more options.
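
After running init, the .graphqlrc.json might look something like this sketch (the exact contents vary by tooling version, and the namespace value here is illustrative):

{
  "schema": "schema.graphql",
  "documents": "**/*.graphql",
  "extensions": {
    "strawberryShake": {
      "name": "LibraryClient",
      "namespace": "Library.GraphQL.Client",
      "url": "https://localhost:44377/graphql/"
    }
  }
}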

Next, we need to define some queries and mutations for our app to run; place these in various .graphql files inside the newly-created Client folder. When the project is built, these files will be found and compiled into your library using source generators; there will also be a number of files created under a Generated folder to support your editor experience.

fragment BooksPage on AllBooksConnection {
  nodes {
    id
    isbn
    name
  }
}

query App {
  allBooks {
    ...BooksPage
  }
}

mutation CreateUser($username: String!) {
  createUser(name: $username) {
    id name
  }
}

Now that we have our client, we can tie into it through DI. Here is an example of using it from a console app; if it were an ASP.NET Core or Blazor website, you would move the IoC configuration to the ConfigureServices method and inject the ILibraryClient wherever you needed to use it (a sketch of that variant follows the console example).

static async Task Main(string[] args)
{
    var serviceCollection = new ServiceCollection();

    serviceCollection.AddScoped(sp => new HttpClient { BaseAddress = new Uri("https://localhost:44377") });

    serviceCollection
        .AddLibraryClient()
        .ConfigureHttpClient(client => client.BaseAddress = new Uri("https://localhost:44377/graphql"));

    var serviceProvider = serviceCollection.BuildServiceProvider();
    var client = serviceProvider.GetRequiredService<ILibraryClient>();

    var result = await client.App.ExecuteAsync();
    var data = result.Data;

    // MockedField is a custom property added to the generated type through its partial class (not part of the GraphQL schema)
    data.AllBooks.Nodes[0].MockedField = "test";
    var mockedData = data.AllBooks.Nodes[0].MockedField;

    var createUserResult = await client.CreateUser.ExecuteAsync("abcd");
    var createdUser = createUserResult.Data;
}
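
If this were an ASP.NET Core site instead, the registration and usage might look roughly like the following sketch (BooksController and its action are illustrative, not part of the demo repo):

public void ConfigureServices(IServiceCollection services)
{
    services
        .AddLibraryClient()
        .ConfigureHttpClient(client => client.BaseAddress = new Uri("https://localhost:44377/graphql"));
}

// inject the generated ILibraryClient wherever it is needed
public class BooksController : ControllerBase
{
    private readonly ILibraryClient client;

    public BooksController(ILibraryClient client) => this.client = client;

    [HttpGet]
    public async Task<IActionResult> GetBooks()
    {
        var result = await client.App.ExecuteAsync();
        return Ok(result.Data?.AllBooks?.Nodes);
    }
}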

Testing

Because Strawberry Shake exposes a partial interface for all generated objects, we can easily mock our client wherever we inject it for use in tests; here is an example mock of our AppQuery call.

[Test]
public void MockAppQuery()
{
    var mockResult = new Mock<IOperationResult<IAppResult>>();
    mockResult.Setup(s => s.Data).Returns(new AppResult(
        new App_AllBooks_AllBooksConnection(new List<App_AllBooks_Nodes_Book>
        {
            new App_AllBooks_Nodes_Book(Guid.NewGuid(), "978-1617294532", "C# In Depth, Fourth Edition")
        })
    ));

    var mockAppQuery = new Mock<IAppQuery>();
    mockAppQuery.Setup(s => s.ExecuteAsync(It.IsAny<CancellationToken>()))
        .ReturnsAsync(mockResult.Object);

    var mockClient = new Mock<ILibraryClient>();
    mockClient.Setup(s => s.App).Returns(mockAppQuery.Object);

    // todo: act

    // todo: assert
}

When to Use

Use this client if you are calling a GraphQL API from a C# client; this could be because your backend calls Shopify, GitHub, or another GraphQL server, or because you are developing a Blazor website or a WPF, Xamarin, or .NET MAUI app.

Apollo

Apollo is the most popular GraphQL client for JS-based apps because of its ease of use; to achieve this, however, it does not enforce some of the ideals of GraphQL. These are discussed in more detail in the When to Use section. A completed demo based on the npx create-react-app library --template typescript template can be found at (5). To use this client, we first need to install the latest @apollo/client and graphql packages with your package manager of choice; I use npm, since I am most familiar with it: npm i @apollo/client graphql. Next, we need to create an ApolloClient instance; this client tells our app where the GraphQL server lives, how to talk to it, and how to cache data. Put this in src/client.ts. (This step, and many of the following steps, are demonstrated in the Apollo get-started docs at (6).)

import {
  ApolloClient,
  ApolloLink,
  HttpLink,
  InMemoryCache,
} from '@apollo/client'
import env from './env'

const httpLink = new HttpLink({
  uri: env.GraphQLEndpoint,
})

export const client = new ApolloClient({
  cache: new InMemoryCache(),
  link: ApolloLink.from([httpLink]),
})

Example environment variable configuration, which I like to place at src/env.ts:

const env = {
  GraphQLEndpoint: process.env.GRAPHQL_ENDPOINT || 'https://localhost:44377/graphql/'
}

export default env

Next, we will configure our GraphQL codegen system. Because we are using TypeScript and React, we would like to have our queries and mutations strongly typed and use hooks to perform our calls. While Apollo has a codegen system you can import, it has many bugs and I have not been able to get it working satisfactorily; I prefer the @graphql-codegen library. To use this library, we will install our packages with npm i --save-dev @graphql-codegen/cli @graphql-codegen/typescript @graphql-codegen/typescript-react-apollo @graphql-codegen/typescript-operations. Next, we will copy our schema, which can typically be obtained through the server's introspection support, to data/schema.graphql and create a codegen.yml file at the root of the project:

schema: ./data/schema.graphql
documents: 'src/**/*.tsx'
generates:
  src/types-and-hooks.ts:
    plugins:
      - typescript
      - typescript-operations
      - typescript-react-apollo

Note that we tell it where our schema is (this could also be a URL pointing to a live server, which is useful when the schema is in active development), which documents to scan for scripts (you can also point it to .graphql files, if you would rather not have your queries and mutations in your .tsx files), where to put the generated code, and which plugins to use when generating it. Once we add "graphql:codegen": "graphql-codegen" to the scripts section of the package.json and run it, we will be able to write queries and use them with hooks in our React components. The best part about this library is that if we were using Angular instead, for example, we could have used the typescript-apollo-angular plugin, and nothing else about the GraphQL integration would change except the final usage (e.g. we would access it with dependency injection instead of hooks). Additional configuration options can be found at (7).

The next couple paragraphs are mostly React-focused; Apollo is not limited to working with React, so you can ignore the React-specific pieces if you are using a different frontend library.

Now we wrap our app in an ApolloProvider configured with an instance of our client in our App file. With everything set up, we can use the generated hooks in our React components.

App.tsx
import { ApolloProvider, gql } from '@apollo/client'
import { useAppQuery } from './types-and-hooks'
import { client } from './client'
import BooksPage from './BooksPage'
import CreateUser from './CreateUser'

gql`
  query App {
    allBooks {
      ...BooksPage
    }
  }
`

// exported so we can access it for testing
export function App() {
  const { data, loading, error } = useAppQuery()

  if (error) {
    return <div>Error!</div>
  }

  if (loading) {
    return <div>Loading...</div>
  }

  return (
    <div className="App container">
      <CreateUser />
      <BooksPage query={data?.allBooks ?? null} />
    </div>
  )
}

function AppRoot() {
  return (
    <ApolloProvider client={client}>
      <App />
    </ApolloProvider>
  )
}

export default AppRoot
BooksPage.tsx
import { gql } from '@apollo/client'
import { BooksPageFragment } from './types-and-hooks'

gql`
  fragment BooksPage on AllBooksConnection {
    nodes {
      id
      isbn
      name
    }
  }
`

interface Props {
  query: BooksPageFragment | null
}

function BooksPage({ query }: Props) {
  return (
    <div className="BooksPage">
      {query?.nodes?.map((m) => (
        <div key={m.id}>
          {m.name} - {m.isbn}
        </div>
      ))}
    </div>
  )
}

export default BooksPage
CreateUser.tsx
import { gql } from '@apollo/client'
import { useCreateUserMutation } from './types-and-hooks'

gql`
  mutation CreateUser($username: String!) {
    createUser(name: $username) {
      id
      name
    }
  }
`

function CreateUser() {
  const [command] = useCreateUserMutation()

  const createUser = () =>
    command({
      variables: {
        username: 'asdf',
      },
    })

  return (
    <div className="CreateUser">
      <button style={{ float: 'right' }} onClick={createUser}>
        Create User
      </button>
    </div>
  )
}

export default CreateUser

Note that the final solution I shared differs slightly so it runs against the final version of the server, after we update it to follow the conventions of Relay (the GraphQL gold-standard client) at the end of this post. The way I wrote it here reflects the state of the server at the end of my previous post (1).

Testing

Apollo can be tested very easily. First, we need to create an array of mock responses; each mock specifies which query it is for and what data is returned. We then pass these mocks to Apollo's MockedProvider. If we do not trigger our UI updates to process with an await act call, the component is in the loading state; after that, it either sets the data or the error, depending on what our mock returned. If we were not using fragments, we could set addTypename={false} on the MockedProvider and leave out the __typename fields in our mocks to make things simpler.

import { AppDocument, BooksPageFragment } from './types-and-hooks'
import { App } from './App'
import { MockedProvider, MockedResponse } from '@apollo/client/testing'
import { screen, render, act } from '@testing-library/react'

const mocks: MockedResponse[] = [
  {
    request: {
      query: AppDocument,
    },
    result: {
      data: {
        allBooks: {
          __typename: 'AllBooksConnection',
          nodes: [
            {
              __typename: 'Book',
              id: 1,
              isbn: '978-1617294532',
              name: 'C# In Depth, Fourth Edition',
            },
            {
              __typename: 'Book',
              id: 2,
              isbn: '978-1617295683',
              name: 'GraphQL in Action',
            },
          ] as BooksPageFragment['nodes'],
        },
      },
    },
  },
]

const errMocks: MockedResponse[] = [
  {
    request: {
      query: AppDocument,
    },
    error: new Error('An error occurred'),
  },
]

it('renders loading state', () => {
  render(
    <MockedProvider mocks={mocks}>
      <App />
    </MockedProvider>,
  )

  const domPiece = screen.getByText('Loading...')
  expect(domPiece).toBeInTheDocument()
})

it('renders book list', async () => {
  render(
    <MockedProvider mocks={mocks}>
      <App />
    </MockedProvider>,
  )

  await act(async () => await new Promise((resolve) => setTimeout(resolve, 0)))

  const domPiece = screen.getByText('C# In Depth, Fourth Edition - 978-1617294532')
  expect(domPiece).toBeInTheDocument()
})

it('renders error state', async () => {
  render(
    <MockedProvider mocks={errMocks}>
      <App />
    </MockedProvider>,
  )

  await act(async () => await new Promise((resolve) => setTimeout(resolve, 0)))

  const domPiece = screen.getByText('Error!')
  expect(domPiece).toBeInTheDocument()
})

Caching

Apollo comes with a built-in cache to help minimize network calls. It populates items that it can build a cache id for when you perform a query, and will update known items with the response from a mutation; it will not insert new items from a mutation's response, however. By default, cache ids are generated using the __typename and the id or _id field, but this can be customized by setting typePolicies. For example, if I wanted to use the isbn field instead of the id field as my cache key (the __typename field is always used), I could use this:

const cache = new InMemoryCache({
  typePolicies: {
    Book: {
      keyFields: ["isbn"],
    },
  },
});

An excellent discussion of cache manipulation can be found at (8).

When to Use

I prefer this framework when building a non-React JS frontend, such as an Angular UI, but take care not to treat each query endpoint as a single call the way a REST API would, and choose your cache strategy carefully to keep your UI quick and responsive while still displaying the correct information.

Relay

Relay is a GraphQL client built for React by Facebook. It is a little harder to learn than Apollo, partly because it relies heavily on fragments rather than simply building and making the calls you need directly. Its benefit, however, is that each component declares which fields of which types it needs, and the app makes a single call to the server when it loads or navigates to a new page. A completed demo based on the npx create-react-app library --template typescript template can be found at (9).

First, we need to install the required dependencies with npm i relay-runtime react-relay and npm i --save-dev relay-compiler babel-plugin-relay @types/relay-runtime @types/react-relay. Then add a section to the package.json to call the relay-compiler tool, plus some configuration values so it can generate the code correctly. Other configuration parameters can be found at (10).

"scripts": {
  "relay": "relay-compiler"
},
"relay": {
  "src": "./src",
  "schema": "./data/schema.graphql",
  "language": "typescript"
},

Now that we have Relay installed, we need the schema of the API our app is querying; this can typically be obtained through the server's introspection support, unless it is not published, in which case you probably are not supposed to be calling the server anyway. Once you have it, place it in your project at data/schema.graphql.

Next, we will declare a TypeScript definition for the babel-plugin-relay/macro module so we can use it without compiler errors; I placed this in src/types.d.ts:

declare module 'babel-plugin-relay/macro' {
  export { graphql } from 'react-relay'
}

Now we will set up our environment variables at src/env.ts:

const env = {
  GraphQLEndpoint: process.env.GRAPHQL_ENDPOINT || 'https://localhost:44377/graphql/'
}

export default env

Next, we have to write a couple of helpers Relay uses to tie into the server; I put these at src/relay-env.ts:

import {
  Environment,
  Network,
  RecordSource,
  RequestParameters,
  Store,
  Variables,
} from 'relay-runtime'
import env from './env'

const url = env.GraphQLEndpoint

function fetchQuery(
  operation: RequestParameters,
  variables: Variables,
) {
  return fetch(url, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      query: operation.text,
      variables,
    }),
  }).then(response => {
    return response.json();
  });
}

const environment = new Environment({
  network: Network.create(fetchQuery),
  store: new Store(new RecordSource()),
});

export default environment;

Finally, we are ready to write some React components. First, we will write our App function:

import { Environment, QueryRenderer } from 'react-relay'
import defaultEnvironment from './relay-env'
import type {
  App_Query,
  App_Query$data,
} from './__generated__/App_Query.graphql'
import { graphql } from 'babel-plugin-relay/macro'
import BooksPage from './BooksPage'
import CreateUser from './CreateUser'

const query = graphql`
  query App_Query {
    allBooks {
      ...BooksPage_query
    }
  }
`

interface Props {
  error: Error | null
  props: App_Query$data | null
}

export function App({ error, props }: Props) {
  if (error) {
    return <div>Error!</div>
  }

  if (!props) {
    return <div>Loading...</div>
  }

  return (
    <div className="App container">
      <CreateUser />
      <BooksPage query={props.allBooks} />
    </div>
  )
}

export interface AppRootProps {
  environment?: Environment
}

function AppRoot({ environment }: AppRootProps) {
  // note: QueryRenderer<App_Query> is actually correct; it's a generic type that uses a Babel plugin like the graphql`` tags
  return (
    <QueryRenderer<App_Query>
      environment={environment ?? defaultEnvironment}
      query={query}
      render={(renderProps) => <App {...renderProps} />}
      variables={{}}
    />
  )
}

export default AppRoot

Tip: to get this working, first write your graphql tags for this and the components below, run npm run relay to generate the __generated__ folder, then write the rest of the code. If you try to paste this content in as-is, npm run relay will not work because it references missing components.

The query is the root query of the app. Unlike Apollo, which works like a REST API in that you can make many queries as you go, Relay has only one root query that pulls the data for all child components. Note that we are using the TypeScript graphql definition we declared above; this is how the relay-compiler tool finds the GraphQL scripts it needs to generate code for. Here, we declare our root query with any arguments required, specify which object we are querying, and reference the fragment declared by a child component. Note also the generically-typed QueryRenderer; this gives the type system the info it needs to type the render and variables props.

Our App is just a React component; as the root render node within the QueryRenderer, it receives arguments representing query errors and props, which are just the query response nodes. Here, we pass the props.allBooks piece into our child component.

Next, update the index.tsx file to render AppRoot instead of App; AppRoot falls back to the environment we defined in the relay-env file unless we pass one in:

root.render(
  <React.StrictMode>
    <AppRoot />
  </React.StrictMode>
);

Now we will look at the BooksPage component. Here, we define our React component with a query argument, which is the GraphQL query result. Note that we cannot just access anything from the query; Relay checks that we only reference what we declare we are referencing and throws an error if we try to reference a field a different component requested. For example, we could add the field publishedOn to the query in the parent component, alongside the fragment spread: publishedOn ...BooksPage_query. If we then cast query to any and tried to reference query.publishedOn inside the BooksPage component, we would get a runtime error; the interface only exists so the code is fully typed. Note that the type of the query property is BooksPage_query$key, similar to our fragment name; the relay-compiler generates this type for us from our fragment.

import { useFragment } from 'react-relay'
import { BooksPage_query$key } from './__generated__/BooksPage_query.graphql'
import { graphql } from 'babel-plugin-relay/macro'

interface Props {
  query: BooksPage_query$key | null
}

function BooksPage({ query }: Props) {
  const data = useFragment(
    graphql`
      fragment BooksPage_query on AllBooksConnection {
        nodes {
          id
          isbn
          name
        }
      }
    `,
    query,
  )

  return (
    <div className="BooksPage">
      {data?.nodes?.map((m) => (
        <div key={m.id}>
          {m.name} - {m.isbn}
        </div>
      ))}
    </div>
  )
}

export default BooksPage

Our CreateUser component is simply a button that calls a mutation to create a user with a hard-coded username when it is clicked (more details on advanced usage of mutations can be found at (11)):

import { useMutation } from 'react-relay'
import { graphql } from 'babel-plugin-relay/macro'

function CreateUser() {
  const [command] = useMutation(graphql`
    mutation CreateUserMutation($username: String!) {
      createUser(name: $username) {
        id
        name
      }
    }
  `)

  const createUser = () =>
    command({
      variables: {
        username: 'asdf',
      },
    })

  return (
    <div className="CreateUser">
      <button style={{ float: 'right' }} onClick={createUser}>
        Create User
      </button>
    </div>
  )
}

export default CreateUser

Routing

Because Relay only does one query per route and you do not control where or when the query happens, we need to consider our routing. For web apps with a single route or a flat routing structure, we can simply use a QueryRenderer on each rendered page-level component as I did above. If our app had a tree of routes, and I used a QueryRenderer on each route-level component, Relay would be unable to render the child routes until the parent route data request was resolved and the component was rendered, leading to delays as discussed at (12). We can resolve this by using the Found router and Found-Relay to perform all the data requests simultaneously. This will be left as an exercise for the reader.

Testing

Testing with Relay is quite simple, although slightly different depending on whether we are testing a root component or a component that consumes a fragment. To test our root AppRoot component, we update it to take an optional environment prop, then write our tests:

import App from './App'
import { createMockEnvironment, MockPayloadGenerator } from 'relay-test-utils'
import ReactTestRenderer from 'react-test-renderer'

test('Loading State', () => {
  const environment = createMockEnvironment()
  const renderer = ReactTestRenderer.create(
    <App environment={environment} />,
  )

  expect(
    renderer.root.find(node => node.children[0] === 'Loading...'),
  ).toBeDefined()
})

test('Data Render', () => {
  const environment = createMockEnvironment()
  const renderer = ReactTestRenderer.create(
    <App environment={environment} />,
  )

  ReactTestRenderer.act(() => {
    environment.mock.resolveMostRecentOperation(operation =>
      MockPayloadGenerator.generate(operation),
    )
  })

  expect(
    renderer.root.find(node => node.props.className === 'BooksPage'),
  ).toBeDefined()
})

test('Error State', () => {
  const environment = createMockEnvironment()
  const renderer = ReactTestRenderer.create(
    <App environment={environment} />,
  )

  ReactTestRenderer.act(() => {
    environment.mock.rejectMostRecentOperation(new Error('Uh-oh'))
  })

  expect(
    renderer.root.find(node => node.children[0] === 'Error!'),
  ).toBeDefined()
})

To test our BooksPage component that takes a fragment (remember to run the relay compiler again to pick up the new graphql query):

import BooksPage from './BooksPage'
import { createMockEnvironment, MockPayloadGenerator } from 'relay-test-utils'
import ReactTestRenderer from 'react-test-renderer'
import { RelayEnvironmentProvider, useLazyLoadQuery } from 'react-relay'
import { graphql } from 'babel-plugin-relay/macro'
import { Suspense } from 'react'
import { BooksPage_TestQuery } from './__generated__/BooksPage_TestQuery.graphql'

test('Renders book names', () => {
  const environment = createMockEnvironment()
  environment.mock.queueOperationResolver((operation) =>
    MockPayloadGenerator.generate(operation, {
      AllBooksConnection() {
        return {
          __typename: 'AllBooksConnection',
          nodes: [
            {
              id: '1',
              isbn: '123-123456789',
              name: 'Book 1',
            },
            {
              id: '2',
              isbn: '987-123456789',
              name: 'Book 2',
            },
          ],
        }
      },
    }),
  )

  const TestRenderer = () => {
    const data = useLazyLoadQuery<BooksPage_TestQuery>(
      graphql`
        query BooksPage_TestQuery @relay_test_operation {
          allBooks {
            ...BooksPage_query
          }
        }
      `,
      {},
    )
    return <BooksPage query={data.allBooks} />
  }

  const renderer = ReactTestRenderer.create(
    <RelayEnvironmentProvider environment={environment}>
      <Suspense fallback="Loading...">
        <TestRenderer />
      </Suspense>
    </RelayEnvironmentProvider>,
  )

  expect(renderer).toMatchSnapshot()
})

Additional details can be found at (13).

When to Use

This is my preferred client for React frontends because it is easy to use and strongly encourages the correct pattern of a single query per defined route. However, there are a few catches you have to watch for because it is the most opinionated of all the clients. First, the server must provide a way to refetch any node given an id, and second, the server must provide a way to page through connections (14), which is typically done by implementing the following spec. Note that while my server-side blog post covers the second, it does not support the first; see the next section for how we update our server to fully support Relay.

interface Node {
  id: ID!
}

type PageInfo {
  hasNextPage: Boolean!
  hasPreviousPage: Boolean!
  startCursor: String
  endCursor: String
}

""" Example connection type """
type MyConnection {
  edges: [MyEdge]
  pageInfo: PageInfo!
}

type Query {
  node(id: ID!): Node
}

Updating the Server

Hot Chocolate provides excellent support for Relay; however, I did not fully implement that in my last post. I did use their paging support where applicable, so we do not need to change that; we do need to implement the universal ID, however. First, we will add .AddGlobalObjectIdentification() to our services.AddGraphQLServer() chain where we register our queries and mutations; this adds the middleware to convert our ids back and forth. Next, we will add the HotChocolate.Types.Relay.NodeAttribute to our response type; this tells Hot Chocolate which pieces implement the Node interface. If our type did not have an Id property, we would need to put the IDAttribute on our id property, but we do not need to do that here because I followed their naming conventions. Next, we need to add a static method on each type that implements the Node interface; this method takes an id parameter of the same type as our Id property and as many [Service] parameters as we need, and returns either T or Task<T>, where T is the node type. This method should be named Get, GetAsync, Get{T}, or Get{T}Async, where {T} is the name of the node type (e.g. GetBookAsync); if we do not follow the naming convention, we can set the NodeResolverAttribute on the method. This is my implementation for the Book type:

public static async Task<Book?> GetAsync(Guid id, [Service] IBookApplication bookApp)
{
    return await bookApp.Get(id);
}

Finally, we need to apply the [ID] attribute to our queries that take an id parameter as well:

public IExecutable<Book> GetBook(
    [Service] IMongoCollection<Book> collection, [ID] Guid id)
{
    if (!featureFlags.EnableBook)
    {
        throw new QueryException("Query not implemented");
    }
    return collection.Find(x => x.Id == id).AsExecutable();
}

Now our server queries are Relay-compliant with minimal work on our end. The final thing we need to do is change our mutations: instead of accepting a list of parameters and returning our query objects directly, they need to accept Input types and return Payload types, as shown in the following schema definition:

# Old
type Mutation {
  checkoutBook(userId: UUID!, bookId: UUID!): User
}
# New
input CheckoutBookInput {
  userId: UUID!
  bookId: UUID!
}
type CheckoutBookPayload {
  user: User
}
type Mutation {
  checkoutBook(input: CheckoutBookInput!): CheckoutBookPayload!
}

Making this change is trivial, so I will leave it to the reader as an exercise (though one possible shape is sketched after the query below). Do note that if you add .AddQueryFieldToMutationPayloads() to the server definition, it will add an additional query: Query field to your mutation payloads; this is so we can pull all updated fields at once. Our checkout book command could then be:

mutation CheckoutBook($input: CheckoutBookInput!, $bookId: UUID!) {
  checkoutBook(input: $input) {
    user {
      id name
    }
    query {
      book(id: $bookId) {
        id name isbn
      }
    }
  }
}
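
For reference, one possible C# shape for such a mutation is sketched below; the record types and the IUserApplication method are illustrative stand-ins for whatever your application layer exposes, not the repo's actual code:

public record CheckoutBookInput(Guid UserId, Guid BookId);

public record CheckoutBookPayload(User? User);

[ExtendObjectType(typeof(Mutation))]
public class MutationUserResolvers
{
    // accepts a single input object and wraps the result in a payload type
    public async Task<CheckoutBookPayload> CheckoutBook(
        CheckoutBookInput input, [Service] IUserApplication userApplication)
    {
        var user = await userApplication.CheckoutBook(input.UserId, input.BookId);
        return new CheckoutBookPayload(user);
    }
}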

Find the full details about adding Relay support to a Hot Chocolate server at (15).

References

  1. Previous blog post: https://superdevelopment.com/2022/11/10/graphql-server-with-hot-chocolate/
  2. GraphQL Server: https://github.com/Hosch250/Library-DDD/tree/graphQLHotChocolate
  3. Strawberry Shake Client Demo: https://github.com/Hosch250/Library-DDD/tree/graphQLHotChocolate/GraphQLClient
  4. Get Started with Strawberry Shake: https://chillicream.com/docs/strawberryshake/get-started
  5. Apollo Client Demo: https://github.com/Hosch250/graphql-web-clients/tree/main/apollo
  6. Get Started with Apollo: https://www.apollographql.com/docs/react/get-started
  7. GraphQL Code Generator: https://www.graphql-code-generator.com/docs/guides/react
  8. Apollo Cache: https://medium.com/rbi-tech/tips-and-tricks-for-working-with-apollo-cache-3b5a757f10a0
  9. Relay Client Demo: https://github.com/Hosch250/graphql-web-clients/tree/main/relay
  10. Relay Compiler: https://github.com/facebook/relay/blob/main/packages/relay-compiler/README.md
  11. Relay Mutations: https://relay.dev/docs/guided-tour/updating-data/graphql-mutations/
  12. Relay Routing: https://relay.dev/docs/v1.6.1/routing/
  13. Testing Relay: https://relay.dev/docs/guides/testing-relay-components/
  14. Relay Server Specification: https://relay.dev/docs/guides/graphql-server-specification/
  15. Relay with Hot Chocolate: https://chillicream.com/docs/hotchocolate/defining-a-schema/relay

GraphQL Server with Hot Chocolate

In my last blog post (1), we implemented a REST API with DDD. This blog post will detail how to add a GraphQL endpoint to this API. If you have never implemented GraphQL, or have never done it with Hot Chocolate, this blog post is for you. You can find the completed source code on GitHub (2).

I chose Hot Chocolate for this because GraphQL .NET, the other alternative, has much less support and requires explicitly defining the schema, whereas Hot Chocolate can infer it from your types (or let you define it explicitly, if you wish to take the time to do that). In addition, Hot Chocolate has a better parser (it passes Facebook's smoke tests, while GraphQL .NET's parser does not), better performance, better data loaders (I will cover what a data loader is below), and schema stitching support (where you combine multiple GraphQL endpoints into a single schema), among other things (3). This blog post starts where my last one, Domain-Driven Design, leaves off: a REST API implemented with DDD.

Before we get started, there are a couple things to note. First, GraphQL is a graph query language. The definition of “graph” it is using is “a collection of vertices and edges that join pairs of vertices” (4). In this definition, our vertices are entities (e.g. a record in a DB), and our edges are the relationships between entities.

Next, GraphQL always returns a 200 response, regardless of errors. On the server-side, which this blog post covers, this primarily means you will add errors to the response in your error handlers, rather than setting the response code in a global exception handler. Error details are returned via an error field, which can be defined on the root of the response, such as a network error, inside the data field with details on a failed top-level query, or on an individual field within a response.

The reason you would use GraphQL over REST or another protocol is the ability to specify exactly what data you need for an entire page (or section of a page). With our REST implementation in my last post, we had to load a user and the details on their checked out books sequentially, because we only returned the checked out book ids on the user object. With GraphQL, we can define a relationship there so the user can pull all the details in a single request; unlike REST, adding these fields does not add any overhead, because the user specifies exactly which fields they want returned in the query. Additionally, we can perform multiple root queries with a single request simply by including them in the query; REST forces us to make one request per query. Combining all of these, we can significantly reduce the network requests our apps and websites make and completely eliminate unused data transfer, which can significantly improve performance on slow networks.

Setting up Hot Chocolate

First, we need to add the Hot Chocolate packages; I am using version 12.0.1 of the following two packages in this blog post:

  • HotChocolate
  • HotChocolate.AspNetCore

The first step to migrating a REST API to a GraphQL server with Hot Chocolate is creating a query type.

public class Query {}

Next we will add the GraphQL middleware to the Startup.cs file.

services
    .AddGraphQLServer()
    .AddQueryType<Query>();

And use the middleware.

app.UseEndpoints(endpoints => {
    endpoints.MapGraphQL();
    endpoints.MapBananaCakePop();
});

Now that we have everything registered, we can start adding our query endpoints (the GET endpoints in REST). For this first step, we will simply do the same implementation as our old REST controllers.

private readonly FeatureFlags featureFlags;
private readonly IBookApplication bookApplication;

public Query(IOptions<FeatureFlags> featureFlags, IBookApplication bookApplication)
{
    this.featureFlags = featureFlags.Value;
    this.bookApplication = bookApplication;
}

public async Task<List<Book>> GetAllBooks()
{
    if (!featureFlags.EnableBook)
    {
        throw new NotImplementedException("Query not implemented");
    }

    var books = await bookApplication.GetAll();
    return books;
}

Now we can run this query at the /GraphQL endpoint.

query {
    allBooks {
        id
        isbn
        name 
        publishedOn   
        authors { name }
        publisher { name }
    }
}

Banana Cake Pop is Hot Chocolate's in-browser query tool; it replaces the original GraphiQL query viewer/editor most GraphQL servers provide and is a Postman-style tool for building GraphQL queries and mutations. Now we can go to /GraphQL, open Banana Cake Pop, and run the query above. Adding the rest of the GET endpoints is equally easy, and adding the PUT, POST, and DELETE endpoints is as simple as adding a Mutation class and adding .AddMutationType<Mutation>() after .AddQueryType<Query>(); a minimal sketch follows.
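
A minimal Mutation type might look like this sketch (IUserApplication and its Create method stand in for whatever application service your commands use; they are illustrative, not the repo's actual code):

public class Mutation
{
    private readonly IUserApplication userApplication;

    public Mutation(IUserApplication userApplication)
    {
        this.userApplication = userApplication;
    }

    // exposed in the schema as createUser(name: String!): User
    public async Task<User> CreateUser(string name)
    {
        return await userApplication.Create(name);
    }
}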

Before you jump in and start making these changes to your app, note that you can only register one Query and one Mutation type. This does not mean you must add all your query and mutation types to the same file—there could be hundreds of these for a large API, which would get very messy. To split these into multiple files, just use the type extension feature.

public class Query { }

[ExtendObjectType(typeof(Query))]
public class QueryBookResolvers
{
    private readonly FeatureFlags featureFlags;
    private readonly IBookApplication bookApplication;

    public QueryBookResolvers(IOptions<FeatureFlags> featureFlags, IBookApplication bookApplication)
    {
        this.featureFlags = featureFlags.Value;
        this.bookApplication = bookApplication;
    }

    public async Task<List<Book>> GetAllBooks()
    {
        if (!featureFlags.EnableBook)
        {
            throw new NotImplementedException("Query not implemented");
        }

        var books = await bookApplication.GetAll();
        return books;
    }
}

Then register each type extension, and it will all work correctly.

services
    .AddGraphQLServer()
    .AddQueryType<Query>()
    .AddTypeExtension<QueryBookResolvers>()
    .AddTypeExtension<QueryUserResolvers>();

Error Handling

Now that we have our operations set up, we need an error filter. Hot Chocolate does not expose raw exception details to the client, even when not in production mode, for the same reasons most REST API servers automatically block them in production mode. However, this prevents us from giving useful errors to our users when they make a mistake. To resolve this, we will implement the IErrorFilter interface, check whether the error was caused by a validation exception, and if so, expose our error message and set the code to Validation to provide context for the message.

public class ValidationErrorFilter : IErrorFilter
{
    public IError OnError(IError error)
    {
        if (error.Exception is not ValidationException validationException)
        {
            return error;
        }

        var errors = new List<IError>();
        foreach (var err in validationException.Errors)
        {
            var newErr = ErrorBuilder.New()
                .SetMessage(err.ErrorMessage)
                .SetCode("Validation")
                .Build();

            errors.Add(newErr);
        }

        return new AggregateError(errors);
    }
}

Finally, we need to register our filter with .AddErrorFilter<ValidationErrorFilter>(). Now our queries will return a reasonable response when there are errors, instead of simply the message "Unexpected Execution Error" whenever we throw a ValidationException from the validations inside our entities or applications. Because we used an aggregate error, our filter can also split each validation failure into its own error, instead of having one long error message with multiple error states or just showing the first message.

{
   "errors": [
    {
      "message": "User must have a name",
      "extensions": {
        "code": "Validation"
      }
    }
  ],
  "data": {
    "createUser": null
  }
}

At this point, you will want to add an error filter for NotImplementedException or change the exception thrown in our operations to be a QueryException, which Hot Chocolate handles automatically. For further validation, there is a library called Fairybread (5) you can use to automatically validate incoming input objects, but I did not think it was ready for production use yet; it only reported the first error found and returned too much information to the caller, including the fact that we were using Fairybread and FluentValidation for validation. If these are not issues for your use case, feel free to try it out.
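
If you take the error filter route, a minimal sketch mirroring the ValidationErrorFilter above might look like this:

public class NotImplementedErrorFilter : IErrorFilter
{
    public IError OnError(IError error)
    {
        if (error.Exception is not NotImplementedException exception)
        {
            return error;
        }

        // surface the message with a code so callers know the operation is disabled
        return ErrorBuilder.FromError(error)
            .SetMessage(exception.Message)
            .SetCode("NotImplemented")
            .Build();
    }
}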

Creating a Graph

Part of the reason to use GraphQL is so clients can pull all related data in a single query. Our user query currently only returns the book id, but as a GraphQL implementation, our users expect to be able to retrieve book information through the relationship between the user and a checked-out book. Without this ability, we are not really defining a graph of vertices joined by edges so much as a simple list of vertices. To fix this, we will write a resolver to override that field and return an object that contains the checked out date, the return date, and the fields from the main book type.

[ExtendObjectType(typeof(User))]
public class UserExtensions
{
    private readonly IBookApplication bookApplication;

    public UserExtensions(IBookApplication bookApplication)
    {
        this.bookApplication = bookApplication;
    }

    [BindMember(nameof(User.Books))]
    public async Task<IReadOnlyList<CheckedOutBookDetails>> GetBooks([Parent] User user)
    {
        var books = new List<CheckedOutBookDetails>();
        foreach (var book in user.Books)
        {
            var bookDetails = await bookApplication.GetBook(book.BookId);
            books.Add(new CheckedOutBookDetails(bookDetails.Id, bookDetails.Isbn, bookDetails.Name, bookDetails.PublishedOn, bookDetails.Publisher, bookDetails.Authors, book.CheckedOutOn, book.ReturnBy));
        }

        return books;
    }
}

Once we register this new resolver in our startup file with .AddTypeExtension<UserExtensions>(), we have a fully functional GraphQL server implementation. We can also use extensions to add entirely new fields (as such, I could have defined many extension functions on the CheckedOutBook type instead of creating a CheckedOutBookDetails type), and also prevent fields from being resolved (6). However, there is still a bit more work we can do. Look at this query and see if you can identify the issues:

query {
  user(id: "e796b1ed-dce1-4302-9d74-c5a543f8cae6") {
    id
    name
    books {
      id name
    }
  }

  u1: user(id: "e2087ec5-8caf-4969-91ce-5c39fc378afc") {
    id
    name
    books {
      id name
    }
  }
}

First, we have two root objects that are hitting the same table; if we had those ids before we started resolving the data, we could combine them into a single database query. If both queries were using the same user id, we could resolve all the data once and reuse it for the second query as well. In this case, it is not a huge concern, but we could maliciously construct a query to pull a very large amount of data repeatedly, which could lock other requests out of our database while it resolves. Second, we have an N+1 problem in each root query; we are resolving the user (1 query), then returning to the database once for each book the user has checked out (N queries). We can resolve both of these issues and reduce our database requests to two, no matter how much data the user needs, with the use of a data loader.

Data Loaders

Data loaders take a list of keys and resolve all of them at the same time; since GraphQL knows which user ids we are querying for immediately, it can combine the two user queries into a single query. Once it gets the users back, it can pull the book ids from the users and resolve them in a single database query as well. Here is an example of our Books field with the data loader:

[BindMember(nameof(User.Books))]
public async Task<IReadOnlyList<CheckedOutBookDetails>> GetBooks([Parent] User user, IResolverContext context)
{
    var books = await context.BatchDataLoader<Guid, Book>(
        async (keys, ct) =>
        {
            var loadedBooks = await bookApplication.GetBooks(keys);
            return loadedBooks.ToDictionary(x => x.Id);
        })
    .LoadAsync(user.Books.Select(s => s.BookId).ToList());

    return books.Select(s => {
        var book = user.Books.Single(t => t.BookId == s.Id);
        return new CheckedOutBookDetails(s.Id, s.Isbn, s.Name, s.PublishedOn, s.Publisher, s.Authors, book.CheckedOutOn, book.ReturnBy);
    }).ToList();
}

Note that we inject the resolver context into this method; Hot Chocolate will inject that into any resolver method for us; we do not need to configure it anywhere. Note also that we still fully control our database access. If we had an issue with querying too many ids at once, we could build our response dictionary from batches of N items per database query rather than loading data for all ids at once. To support this, I added the following implementations to our library context and book application to query multiple books by id in one operation. See if you can use these as a reference to change the GetUser method on our query resolver to use a Data Loader; one possible solution is sketched after the code below.

public async Task<IReadOnlyList<Book>> GetBooksAsync(IReadOnlyList<Guid> ids)
{
    return await libraryContext.Book.AsQueryable()
        .Where(f => ids.Contains(f.Id))
        .ToListAsync();
}

public async Task<IReadOnlyList<ApiContracts.Book>> GetBooks(IReadOnlyList<Guid> ids)
{
    var books = await libraryRepository.GetBooksAsync(ids);
    return books.Select(mapper.Map<Book, ApiContracts.Book>).ToList();
}
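
One possible solution for that exercise is sketched below; the userApplication field and its GetUsers batch method are assumptions that mirror the book implementations above:

public async Task<ApiContracts.User?> GetUser(Guid id, IResolverContext context)
{
    if (!featureFlags.EnableUser)
    {
        throw new QueryException("Query not implemented");
    }

    // batches every user id requested in this operation into a single database query
    return await context.BatchDataLoader<Guid, ApiContracts.User>(
        async (keys, ct) =>
        {
            var users = await userApplication.GetUsers(keys);
            return users.ToDictionary(x => x.Id);
        })
    .LoadAsync(id);
}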

Testing

Now that our server is done, we need to write tests for it. If you are not using the data loaders, you can simply write unit tests around your resolver methods. Using the data loaders with mocks quickly becomes a nuisance due to the sheer amount of mocking needed, but writing integration tests is still easy. First, we set up our service collection and build a service provider; we can tie into our Startup.ConfigureServices method to handle most of the work with this. Now we just call ExecuteRequestAsync with our query as a string parameter; we can then assert against the error object on the result or call ToJson() on it and either assert against the JSON directly or deserialize the result into an object to test against.

[Fact]
public async Task ReturnsUser()
{
    var options = new Dictionary<string, string>
    {
        ["FeatureFlags:EnableUser"] = bool.TrueString,
        ["ConnectionStrings:Database"] = "mongodb://localhost"
    };

    var config = new ConfigurationBuilder().AddInMemoryCollection(options);

    var services = new ServiceCollection();
    services.AddSingleton<IConfiguration>(config.Build());
    new Startup(config.Build()).ConfigureServices(services);
    var serviceProvider = services.BuildServiceProvider();

    var builder = await serviceProvider.ExecuteRequestAsync(
@"query {
  user(id: ""e796b1ed-dce1-4302-9d74-c5a543f8cae6"") {
    id name books { id name }
  }
}");

    var result = builder.ToJson();
    var expected = @"{
  ""data"": {
    ""user"": {
      ""id"": ""e796b1ed-dce1-4302-9d74-c5a543f8cae6"",
      ""name"": ""Abraham Hosch"",
      ""books"": [
        {
          ""id"": ""30558e66-f0df-4dcd-aa96-1b3d329f1b86"",
          ""name"": ""C# in Depth: 4th Edition""
        },
        {
          ""id"": ""0a08e8df-b71e-4300-9683-bd4a1b7bcaf1"",
          ""name"": ""Dependency Injection Principles, Practices, and Patterns""
        }
      ]
    }
  }
}";

    Assert.Equal(expected, result);
}

Conclusion

Now we have a functioning GraphQL server complete with error handling and probably better performance under load than our original REST API thanks to the data loaders and fewer requests to construct a full graph of data. There are still a few more things to consider before we are complete, however, primarily around security.

GraphQL has some attack vectors REST APIs avoid, including:

  • Exposing the entire query and response structure to all clients
  • Potentially deeply nested queries that take a long time to resolve, such as an arbitrarily deep friends of friends relationship
  • Fields that are performance intensive to resolve which can be queried multiple times in a single request

For this API, I do not mind that anyone can read our type schema, but if I did, I could disable introspection for all unauthorized users (7) the same way I could require authorization for specific operations and/or fields. The simplest way to resolve the other two is to set a timeout to prevent slow operations from killing performance; Hot Chocolate defaults to a 30-second timeout. If necessary, we can also define complexity values (8) for operations and block execution of requests whose computed complexity is higher than our assigned limit. We could, for example, prevent any queries nested 5 levels or deeper, and disallow multiple root queries in a request when running an expensive operation by setting the complexity for these operations at or above the maximum allowed value.
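
As a rough sketch of what that configuration could look like in Hot Chocolate 12 (the exact option names can vary between versions; see (8) for the current API):

services
    .AddGraphQLServer()
    .AddQueryType<Query>()
    .ModifyRequestOptions(o =>
    {
        // fail any request still executing after 10 seconds
        o.ExecutionTimeout = TimeSpan.FromSeconds(10);

        // enable the complexity analyzer and cap the computed cost per request
        o.Complexity.Enable = true;
        o.Complexity.MaximumAllowed = 100;
    });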

References

  1. Previous blog post: https://superdevelopment.com/2021/09/24/domain-driven-design/
  2. Source code: https://github.com/Hosch250/Library-DDD/tree/hotChocolateBlog
  3. Discussion of Hot Chocolate vs GraphQL .NET: https://github.com/ChilliCream/hotchocolate/issues/392#issuecomment-571733745
  4. Definition of a Graph: https://www.merriam-webster.com/dictionary/graph
  5. Fairybread: https://github.com/benmccallum/fairybread
  6. Extending a schema: https://chillicream.com/docs/hotchocolate/defining-a-schema/extending-types
  7. Introspection: https://chillicream.com/docs/hotchocolate/server/introspection
  8. Operation Complexity: https://chillicream.com/docs/hotchocolate/security/operation-complexity

Domain-Driven Design

You’ve decided to use Domain-Driven Design (DDD), but aren’t sure how to implement it. Maybe you’ve seen it go wrong before and aren’t sure how to prevent that happening again. Maybe you’ve never done it and aren’t sure where to start. This post will show you how to implement a DDD domain layer, including aggregates, value objects, domain commands, and validation, and how to avoid some of the pitfalls I’ve seen. It will not discuss the why of DDD vs other competing patterns; nor, for the sake of brevity, will it discuss the infrastructure or application layers of a DDD app. To demonstrate these concepts in action, I have built a backend for a library using DDD; the most relevant sections will be shown in the post, and the full version can be found on GitHub. The tech stack I used is an ASP.NET Core API written in C#, backed by MongoDB.

The Aggregate Root

The aggregate root is the base data entity of a data model. This entity will contain multiple properties, which may be base CLR types or value objects. Value objects can be viewed as objects that are owned by the aggregate root. Each object, whether an aggregate root or value object, is responsible for maintaining its state. We will start by defining an abstract aggregate root type with properties all our aggregate roots will have:

public abstract class AggregateRoot
{
    public string AuditInfo_CreatedBy { get; private set; } = "Library.Web";
    public DateTime AuditInfo_CreatedOn { get; private set; } = DateTime.UtcNow;

    public void SetCreatedBy(string createdBy)
    {
        AuditInfo_CreatedBy = createdBy;
    }
}

Next, we will define an implementation of this type containing a couple internal constructors, a number of data properties, and a couple methods for updating the data properties. Looking through the implementation below, you will probably note that my data properties have private setters and methods for setting them. This looks a little strange when you consider that properties allow custom setters, but the reason for this is serialization. When we deserialize an object from our DB, we don’t want to have to go through any validation we might do when setting a property; we just want to read into the property and assume the data has already been validated. When the data changes, we need to validate it, so we make the property setters private and provide public methods to set the data. Another benefit the methods provide is you can pass a domain command to them, instead of just the final expected value of the property; this allows you to provide supplemental information as necessary.

public class User : AggregateRoot
{
    /// <summary>
    /// Used for deserialization
    /// </summary>
    [BsonConstructor]
    internal User(Guid id, string name, bool isInGoodStanding, List<CheckedOutBook> books)
    {
        Id = id;
        Name = name;
        IsInGoodStanding = isInGoodStanding;
        this.books = books;
    }

    /// <summary>
    /// Used by the UserFactory; prefer creating instances with that
    /// </summary>
    internal User(string name)
    {
        Id = Guid.NewGuid();
        Name = name;
        IsInGoodStanding = true;
    }

    public Guid Id { get; private set; }
    public string Name { get; private set; }
    public bool IsInGoodStanding { get; private set; }

    [BsonElement(nameof(Books))]
    private readonly List<CheckedOutBook> books = new();
    public IReadOnlyCollection<CheckedOutBook> Books => books.AsReadOnly();

    public async Task CheckoutBook(CheckoutBookCommand command)
    {
        // validation happens in any event handler listening for this event
        // e.g. Does the library have this book, is it available, etc.
        await DomainEvents.Raise(new CheckingOutBook(command));

        var checkoutTime = DateTime.UtcNow;
        books.Add(new CheckedOutBook(command.BookId, checkoutTime, checkoutTime.Date.AddDays(21)));

        // the post-change event; named so it does not collide with the CheckedOutBook value object below
        await DomainEvents.Raise(new BookCheckedOut(command));
    }

    public async Task ReturnBook(ReturnBookCommand command)
    {
        // validation happens in any event handler listening for this event
        // e.g. Does the user have this book checked out, etc.
        await DomainEvents.Raise(new ReturningBook(command));

        books.RemoveAll(r => r.BookId == command.BookId);
        await DomainEvents.Raise(new ReturnedBook(command));
    }
}

public class CheckedOutBook
{
    public CheckedOutBook(Guid bookId, DateTime checkedOutOn, DateTime returnBy)
    {
        BookId = bookId;
        CheckedOutOn = checkedOutOn;
        ReturnBy = returnBy;
    }

    public Guid BookId { get; private set; }
    public DateTime CheckedOutOn { get; private set; }
    public DateTime ReturnBy { get; private set; }
}

Having POCOs or dumb objects (objects that aren’t responsible for maintaining their internal state) is often one of the first mistakes people make when doing DDD. They will create a class with public getters and setters and put their logic in a service (I will go over domain services and why you don’t usually want to use them later). The problem with this is that two places might be working with the same object instance at the same time and write data that the other is reading or writing, so the object risks ending up in an inconsistent state. DDD prevents inconsistent state by only allowing the object to set its own state, so if two consecutive changes to the same object would lead to inconsistent state, the object will catch that with its internal validation, instead of relying on the caller to have validated the change.

Domain Commands

Domain commands are how you tell an aggregate to update itself. In the code above, CheckoutBook and ReturnBook are domain commands. It isn’t strictly necessary to create a command type to represent the data being passed; you could have just passed a Guid bookId instead of a command class into the method. However, I like creating a command type because you have a single object to run validation against, and you can validate parameters when creating the command instance. For example, if your domain command requires a certain value be provided, you could validate that it’s not null in the type constructor instead of in the domain command itself. The validation on the type especially helps the logic flow well; you can’t really validate a Guid without additional context; you can validate a ReturnBookCommand type that contains a Guid, and you already have the additional context around what the Guid is.

public class CheckoutBookCommand
{
    public Guid BookId { get; }
    public Guid UserId { get; }

    public CheckoutBookCommand(Guid userId, Guid bookId)
    {
        if (bookId == Guid.Empty) { throw new ArgumentException($"Argument {nameof(bookId)} cannot be an empty guid", nameof(bookId)); }
        if (userId == Guid.Empty) { throw new ArgumentException($"Argument {nameof(userId)} cannot be an empty guid", nameof(userId)); }

        BookId = bookId;
        UserId = userId;
    }
}

Validation

You probably noticed the comments I had in the domain command implementations about validation. Validation is often tricky to get right in DDD because it relies on other dependencies, such as a DB. For example, to successfully check out a book, the system has to make sure both the book and user are in the system, that the book is available, that the user is in good standing, etc. To do these checks, we have already pulled the user from the DB to get the user aggregate, so we know the user is in the system. However, we haven’t checked that the book is in the system, so we need to reference a database instance when we do our validation inside the domain command. We can’t inject a DB instance into the aggregate because we don’t resolve aggregates from the IoC container, and even if we could, it’s not the aggregate’s responsibility to connect to the DB. We could new a DB instance up in the command, but that is wrong for reasons outside the scope of this article, in addition to not being the aggregate’s responsibility to talk to the DB (research Dependency Injection and Inversion of Control if you don’t know why). This is where our command system comes into play. Notice the DomainEvents.Raise call. I have that implemented with MediatR, which is a .NET implementation of the mediator pattern; see the link at the end of this article for more detail:

public static class DomainEvents
{
    public static Func<IPublisher> Publisher { get; set; }
    public static async Task Raise<T>(T args) where T : INotification
    {
        var mediator = Publisher.Invoke();
        await mediator.Publish<T>(args);
    }
}

We register IPublisher and our notifications and commands with our IoC container so we can resolve dependencies in our handlers. We then create a method that knows how to resolve an IPublisher instance and assign it to the static Publisher property in our startup; I sketch this wiring after the validation classes below. The static Raise method then has all the information it needs to raise the event and wait for the handlers to complete. In this example, I use the FluentValidation library for validation within these handlers. We could put an error handler in our HTTP response pipeline to catch ValidationExceptions and translate them into 400 responses.

public class CheckingOutBook : INotification
{
    public CheckoutBookCommand Command { get; }

    public CheckingOutBook(CheckoutBookCommand command) => Command = command;
}

public class CheckingOutBookValidationHandler : INotificationHandler<CheckingOutBook>
{
    private readonly CheckingOutBookValidator validator;

    public CheckingOutBookValidationHandler(CheckingOutBookValidator validator) => this.validator = validator;

    public async Task Handle(CheckingOutBook @event, CancellationToken cancellationToken)
    {
        // The validator below uses MustAsync rules, so it must be invoked asynchronously;
        // the synchronous ValidateAndThrow would throw on async rules at runtime
        await validator.ValidateAndThrowAsync(@event.Command, cancellationToken);
    }
}

public class CheckingOutBookValidator : AbstractValidator<CheckoutBookCommand>
{
    public CheckingOutBookValidator(ILibraryRepository repository)
    {
        RuleFor(x => x.UserId)
            .MustAsync(async (userId, _) =>
            {
                var user = await repository.GetUserAsync(userId);
                return user?.IsInGoodStanding == true;
            }).WithMessage("User is not in good standing");

        RuleFor(x => x.BookId)
            .MustAsync(async (bookId, _) => await repository.GetBookAsync(bookId) is not null)
            .WithMessage("Book does not exist")
            .DependentRules(() =>
            {
                RuleFor(x => x.BookId)
                    .MustAsync(async (bookId, _) => !await repository.IsBookCheckedOut(bookId))
                    .WithMessage("Book is already checked out");
            });
    }
}
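
To translate those failures into HTTP responses, a rough sketch of the error handler mentioned above might look like this in ASP.NET Core middleware (the response body shape here is an assumption):

app.Use(async (context, next) =>
{
    try
    {
        await next();
    }
    catch (ValidationException ex)
    {
        // Surface the FluentValidation failures as a 400 with the rule messages
        context.Response.StatusCode = StatusCodes.Status400BadRequest;
        await context.Response.WriteAsJsonAsync(ex.Errors.Select(e => e.ErrorMessage));
    }
});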

Creating Entities

At this point you may be wondering how we ensure an aggregate root is valid on initial creation, since we can't await results in a constructor the way we do in the domain commands on the entity. This is a prime case for factories: we make the constructor internal to reduce its accessibility as much as possible, then create a factory that makes any infrastructure calls it needs, calls the constructor, and raises an event with the newly created entity as data so handlers can validate it. This way, we encapsulate all the logic needed to create an entity, instead of relying on each place an entity is created to perform the logic correctly and ensure the entity is valid.

public class UserFactory
{
    public async Task<User> CreateUserAsync(string name)
    {
        var user = new User(name);
        await DomainEvents.Raise(new CreatingUser(user));

        return user;
    }
}
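
For completeness, the CreatingUser notification and its validation handler mirror the CheckingOutBook pattern above; the Name rule and the UserNameExistsAsync repository method are assumptions for illustration:

public class CreatingUser : INotification
{
    public User User { get; }

    public CreatingUser(User user) => User = user;
}

public class CreatingUserValidationHandler : INotificationHandler<CreatingUser>
{
    private readonly CreatingUserValidator validator;

    public CreatingUserValidationHandler(CreatingUserValidator validator) => this.validator = validator;

    public async Task Handle(CreatingUser @event, CancellationToken cancellationToken) =>
        await validator.ValidateAndThrowAsync(@event.User, cancellationToken);
}

public class CreatingUserValidator : AbstractValidator<User>
{
    public CreatingUserValidator(ILibraryRepository repository)
    {
        // Hypothetical rule: user names must be non-empty and unique in the system
        RuleFor(x => x.Name)
            .NotEmpty()
            .MustAsync(async (name, _) => !await repository.UserNameExistsAsync(name))
            .WithMessage("A user with this name already exists");
    }
}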

Domain Services

You are probably wondering at this point why I didn't simply use a service to perform the checkout book command. For example, I could define a service with a method CheckoutBook(User user, Guid bookId) and perform all the validation inline, instead of importing MediatR and FluentValidation and creating three classes just to validate a command; I would then inject this service wherever the domain command is called and call the service instead. I could still make the domain command responsible for updating the entity instance so that random values aren't being assigned elsewhere. The problem is that I now have some logic in the service and some in my entity: how do I determine which logic goes where? When multiple devs work on a project, this becomes very difficult to manage; people have to figure out where existing logic lives and where to put new logic, which often leads to duplicated logic, which in turn leads to bugs when one copy is updated and the other isn't. Additionally, as mentioned above, because the validation logic occurs outside my entity, I can no longer trust that the entity is in a valid state, because I don't know whether validation was run before the entity was told to update. When DDD is implemented correctly, only the entity can update itself, so we can validate data changes once, inside the entity, just before we apply them, instead of hoping the caller remembered to fully validate the changes.
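
For contrast, here is a sketch of the service-based alternative being argued against; CheckoutService, the inline checks, and the User.Id property are hypothetical, and the entity's CheckoutBook here would simply mutate state without raising the validation event:

public class CheckoutService
{
    private readonly ILibraryRepository repository;

    public CheckoutService(ILibraryRepository repository) => this.repository = repository;

    public async Task CheckoutBook(User user, Guid bookId)
    {
        // Validation now lives out here, away from the entity...
        if (await repository.GetBookAsync(bookId) is null) { throw new InvalidOperationException("Book does not exist"); }
        if (await repository.IsBookCheckedOut(bookId)) { throw new InvalidOperationException("Book is already checked out"); }

        // ...so the entity can no longer guarantee these checks were run before it updates itself
        await user.CheckoutBook(new CheckoutBookCommand(user.Id, bookId));
    }
}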

References
