#178: Phoenix LiveView Tutorial Part 2
Now that we have our new Elixir Phoenix application installed and our layout updated, let’s start modeling the data we’ll need. A game of Wordle uses a 5-letter word as the solution, and players get 6 chances to guess it. We’ll need two different tables. The first we’ll populate with all the possible 5-letter words, which we’ll use to ensure a player’s guesses are valid. The second we’ll populate with a smaller set of 5-letter words that will act as the “solve” for a game. Let’s start by creating the table with all the possible 5-letter words.
We’ll go to the command line, and because we’re building a Phoenix LiveView application, let’s run mix phx.gen.live to generate a set of LiveView components with a context module “WordBank” and a schema module “Word” with the plural “words” for the database table name. We only need our table to have one column, “name”, which we’ll want to be a string.
$ mix phx.gen.live WordBank Word words name:string
* creating lib/werdle_web/live/word_live/show.ex
* creating lib/werdle_web/live/word_live/index.ex
* creating lib/werdle_web/live/word_live/form_component.ex
* creating lib/werdle_web/live/word_live/index.html.heex
* creating lib/werdle_web/live/word_live/show.html.heex
* creating test/werdle_web/live/word_live_test.exs
* creating lib/werdle/word_bank/word.ex
* creating priv/repo/migrations/{timestamp}_create_words.exs
* creating lib/werdle/word_bank.ex
* injecting lib/werdle/word_bank.ex
* creating test/werdle/word_bank_test.exs
* injecting test/werdle/word_bank_test.exs
* creating test/support/fixtures/word_bank_fixtures.ex
* injecting test/support/fixtures/word_bank_fixtures.ex
Add the live routes to your browser scope in lib/werdle_web/router.ex:
live "/words", WordLive.Index, :index
live "/words/new", WordLive.Index, :new
live "/words/:id/edit", WordLive.Index, :edit
live "/words/:id", WordLive.Show, :show
live "/words/:id/show/edit", WordLive.Show, :edit
...
Remember to update your repository by running migrations:
$ mix ecto.migrate
It generates some files for us, including the LiveView modules and the database migration for our “words” table. It also lists some live routes for us to include in our router.ex. For our application, we really only need one route and LiveView to play, so let’s use word_live/index.ex. Then we can go ahead and remove some of the other files that were generated: show.ex, form_component.ex, and the show.html.heex template.
With those deleted, let’s add the WordLive.Index, :index route. We’ll copy it, then open our router.ex module and add it to our routes. Instead of having this be at /words, let’s have this be /.
lib/werdle_web/router.ex...
scope "/", WerdleWeb do
pipe_through :browser
live "/", WordLive.Index, :index
end
...
With that let’s go back to the command line and run the migration.
$ mix ecto.migrate
...
Now that we have our “words” database table, we need to seed it with a bunch of 5-letter words. To get a list of words, we’ll use the word_list package, which provides a stream of English words.
Let’s go to Hex and grab the word_list config. Then let’s open our Mixfile and add it to our list of dependencies.
mix.exs...
defp deps do
...
{:word_list, "~> 0.1.0"},
...
end
...
Then let’s go to the command line and run mix deps.get to download it.
$ mix deps.get
...
New:
word_list 0.1.0
* Getting word_list (Hex package)
Now that we have a way to get words, let’s create a way to populate our database with them. We’ll open our word.ex schema module. Since we’ll want all our words to be lowercase, let’s create a private function called downcase_name that takes a changeset. Then let’s use a case statement with Ecto.Changeset’s get_change function to get any change for the :name field.
If it returns nil we’ll return the changeset. And if there is a change, we’ll call put_change passing in our changeset, the same field :name and a lowercase name, which we can get with String.downcase. Then let’s call our downcase_name function from our changeset function.
Now let’s add one more check to our changeset - to ensure a word contains only lowercase letters. We’ll create another private function and let’s call this one validate_characters that takes the changeset. And let’s get a regex that matches against lowercase letters. Then we can call Ecto.Changeset’s validate_format function with our regex to ensure a change has the given format. With that, we’ll need to update our changeset function to call it.
lib/werdle/word_bank/word.ex...
defmodule Werdle.WordBank.Word do
use Ecto.Schema
import Ecto.Changeset
schema "words" do
field :name, :string
timestamps(type: :utc_datetime)
end
@doc false
def changeset(word, attrs) do
word
|> cast(attrs, [:name])
|> downcase_name()
|> validate_required([:name])
|> validate_characters()
end
defp downcase_name(changeset) do
case get_change(changeset, :name) do
nil ->
changeset
name ->
put_change(changeset, :name, String.downcase(name))
end
end
defp validate_characters(changeset) do
lowercase_regex = ~r/\A[a-z]+\z/
validate_format(
changeset,
:name,
lowercase_regex,
message: "must contain only lowercase letters"
)
end
end
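Seen in isolation, the two transformations the changeset relies on behave like this. This is a standalone sketch using only the standard library, so none of the Werdle modules are required:

```elixir
# The same regex used in validate_characters/1:
# anchored at both ends, so only all-lowercase-letter strings match.
lowercase_regex = ~r/\A[a-z]+\z/

# downcase_name/1 lowercases any new :name value before validation runs.
IO.inspect(String.downcase("Apple"))                  # "apple"

# validate_format/4 passes only when the whole string matches the regex.
IO.inspect(Regex.match?(lowercase_regex, "apple"))    # true
IO.inspect(Regex.match?(lowercase_regex, "Apple"))    # false
IO.inspect(Regex.match?(lowercase_regex, "app1e"))    # false
```

Because downcase_name runs before validate_characters in the pipeline, a mixed-case word like “Apple” is lowercased first and then passes validation, while a word with digits or symbols is rejected.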
Now let’s open seeds.exs and add a script to populate our database with the words we’ll need to play our game. I’ll remove some of the comments so we have more screen space, but we can see that to run this script we need to call mix run priv/repo/seeds.exs from the command line.
Let’s create a script that will batch-insert 5-letter words into our database. To start, let’s first add aliases for our Repo and Word modules so we can call them without the prefix. Then let’s call WordList.getStream! to fetch a stream of words from the WordList module. We only want to use words with five letters, so let’s pipe our stream into Enum.filter to filter out words that are not exactly 5 characters.
Once we have our filtered words, let’s convert each word into a map with the :name key to match our Word schema. Great, now that we have the format for our words, let’s put them into batches with Enum.chunk_every, passing in 1000. This will split our stream into batches of 1,000 words each. Then let’s pipe that into Enum.each, take each batch of words, and insert them into the database with Repo.insert_all, passing in our Word schema module and the word batch. Our script is now set to process a list of words, filter them to only those with a length of 5 characters, format them, and then batch-insert them into the database.
priv/repo/seeds.exs...
alias Werdle.Repo
alias Werdle.WordBank.Word
WordList.getStream!()
|> Enum.filter(fn word ->
String.length(word) == 5
end)
|> Enum.map(fn word ->
%{
name: word
}
end)
|> Enum.chunk_every(1000)
|> Enum.each(fn word_batch ->
Repo.insert_all(Word, word_batch)
end)
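The chunking step is easy to see with a small list. Enum.chunk_every/2 splits a collection into fixed-size groups, emitting a final, smaller batch when the list doesn’t divide evenly:

```elixir
# Splitting 5 items into batches of 2 yields two full
# batches plus one leftover batch.
IO.inspect(Enum.chunk_every([1, 2, 3, 4, 5], 2))
# [[1, 2], [3, 4], [5]]
```

Batching matters here because each Repo.insert_all call turns every field of every row into a query parameter, and PostgreSQL caps the number of parameters in a single statement, so inserting the whole word list in one call could fail.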
Now to run seeds.exs, let’s go to the command line and run our seed file with “mix run” and the path to our file. But when we run it, we get an error - we don’t have any value for our inserted_at field.
$ mix run priv/repo/seeds.exs
...
** (Postgrex.Error) ERROR 23502 (not_null_violation) null value in column "inserted_at" of relation "words" violates not-null constraint
If we open Ecto’s documentation for the insert_all function we see some notes about autogenerated values.
If the schema primary key has type :id or :binary_id, it will be handled either at the adapter or the storage layer. However, any other primary key type or autogenerated value, like Ecto.UUID and timestamps, won’t be autogenerated when using insert_all/3. You must set those fields explicitly.
Alright, what we need to do is set the inserted_at and updated_at values in the fields for our Word. Let’s go back to seeds.exs and for the timestamp let’s get the current UTC time truncated to the nearest second. We’ll use this for the inserted_at and updated_at timestamps.
priv/repo/seeds.exs...
date_time =
DateTime.utc_now() |> DateTime.truncate(:second)
...
|> Enum.map(fn word ->
%{
name: word,
inserted_at: date_time,
updated_at: date_time
}
end)
...
With that let’s go back to the command line and re-run our seeds script. Great, it looks like everything worked.
$ mix run priv/repo/seeds.exs
...
To confirm everything looks good, I’ll go ahead and open the database in Postico, which is a graphical user interface application used to manage a PostgreSQL database on Mac. You don’t need to have this installed, but I’ll use it here to confirm that the “words” database was populated. And great - our “words” table has been populated with over 12,000 5-letter words.
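If you don’t have a GUI database client installed, a quick count from IEx works just as well. This is a sketch that assumes the app’s Repo is started (which iex -S mix does for us):

```elixir
# Start an IEx session inside the project: iex -S mix
# Ecto.Repo's aggregate/2 runs a COUNT over the given schema's table.
Werdle.Repo.aggregate(Werdle.WordBank.Word, :count)
# => the number of seeded rows in "words"
```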
Now that we have a database of valid words to use, let’s create the other table we need. This table will store the valid “solves” for the game - the words that players are trying to guess when they start a game. Let’s go back to the command line, and this time let’s use the mix phx.gen.context generator with the same WordBank context module. We’ll call our new schema Solve with the table name solves, and this will also have a single field, name.
When using an existing context module Phoenix will ask you to confirm that’s what you want to do. We’ll confirm that the same context works for us. And great, this generated our solve.ex module and migration and then updated the word_bank.ex context module to include functions for the Solve module.
$ mix phx.gen.context WordBank Solve solves name
...
* creating lib/werdle/word_bank/solve.ex
* creating priv/repo/migrations/{timestamp}_create_solves.exs
* injecting lib/werdle/word_bank.ex
* injecting test/werdle/word_bank_test.exs
* injecting test/support/fixtures/word_bank_fixtures.ex
Remember to update your repository by running migrations:
$ mix ecto.migrate
Let’s go ahead and run the migration.
$ mix ecto.migrate
...
With that, we now need to populate the solves table with possible solutions. In the episode notes I’ve included a link to download the solves-data.csv - this is a CSV that contains all the different words we’ll want to use for our potential game solves. Go ahead and download it now and place it in the “priv/repo” directory.
Once that’s done, we’ll need a way to read the words from the CSV so we can insert them into the database. Let’s use the NimbleCSV package to read our CSV. So let’s go to Hex and grab the NimbleCSV package config.
Then let’s open our Mixfile and add it to our list of dependencies.
mix.exs...
defp deps do
...
{:nimble_csv, "~> 1.2"},
...
end
...
With that, we’ll go to the command line and run mix deps.get to download it.
$ mix deps.get
...
New:
nimble_csv 1.2.0
* Getting nimble_csv (Hex package)
Great, now we need to create a way to read our solves and save them. To do that, let’s go back to our seeds.exs and we’ll take the path to our CSV file and pipe it into File.stream!() - which will read the file line by line instead of loading the entire file into memory. Then we’ll pipe that into our NimbleCSV parser. Specifically, let’s use the NimbleCSV.RFC4180 module, which will parse our file according to the RFC 4180 standard.
We’ll go back to our script and pipe our stream into NimbleCSV.RFC4180.parse_stream(). This will process the stream and parse each line from CSV format into a list of strings. Then we’ll pipe that into Enum.each - and since there’s only one field in our CSV, we’ll pattern match on it to get our “solve”. Once we have our solve, let’s call WordBank.create_solve - which was added to our WordBank context module when we generated our Solve module - to save the solve for us to use.
Let’s also go ahead and alias WordBank so we can use it without the prefix. And because we’ve already populated our “words” table, let’s go ahead and comment out the other part of our script. Now when we run our seeds file, it will only populate the “solves” table.
priv/repo/seeds.exs...
alias Werdle.{Repo, WordBank}
...
"priv/repo/solves-data.csv"
|> File.stream!()
|> NimbleCSV.RFC4180.parse_stream()
|> Enum.each(fn [solve] ->
WordBank.create_solve(%{"name" => solve})
end)
...
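One thing worth knowing about NimbleCSV: parse_stream/2 (and parse_string/2) skip the first row by default, treating it as a header. If your CSV has no header row, pass skip_headers: false so the first solve isn’t silently dropped. A standalone sketch you can run as an .exs script (the sample words are illustrative):

```elixir
# Fetches nimble_csv so this runs outside the Phoenix project.
Mix.install([{:nimble_csv, "~> 1.2"}])

# With the default (skip_headers: true), "abbey" would be
# discarded as a header row; skip_headers: false keeps every row.
IO.inspect(NimbleCSV.RFC4180.parse_string("abbey\nabide\n", skip_headers: false))
# [["abbey"], ["abide"]]
```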
Then let’s go back to the command line and run our seed file again.
$ mix run priv/repo/seeds.exs
Great, it looks like everything ran correctly, but let’s open our database app again. And if we look at our “solves” table - great - we have almost 300 different “solves” that will be used in our game. Now let’s make sure our script works to generate both the “words” and “solves” tables at the same time.
We’ll go back to the command line and let’s delete our existing database with mix ecto.drop.
$ mix ecto.drop
Then let’s go back to our seeds file and un-comment our code. We’ll save that and then go back to the command line. We can create our database, run the database migrations, and then run the seed file all with one command: mix ecto.setup.
$ mix ecto.setup
I don’t see any errors, and when we go back to our database interface, our “words” table is populated along with our “solves” table. We now have our database populated with words we can use for game “solves” and to validate players’ guesses.