&lt;?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
  <channel>
    <title>nick.recoil.org - Articles</title>
    <link>https://nick.recoil.org/articles.rss.xml</link>
    <description>Everything except notes for nick.recoil.org</description>
    <language>en</language>
    
      <item>
        <title>Efficient Ghost theme development using Docker &amp; livereload</title>
        <link>https://nick.recoil.org/articles/ghost-development-setup/</link>
        <guid>https://nick.recoil.org/articles/ghost-development-setup/</guid>
        <pubDate>Wed, 18 Mar 2026 12:28:23 UTC</pubDate>
        <description>&lt;![CDATA[In this article I go over my development setup for Ghost theme development, making use of livereload for a highly efficient feedback loop while also keeping everything inside Docker containers. I also show how Maildev makes SMTP configuration for Ghost much easier.]]></description>
        <content:encoded>&lt;![CDATA[<div class="flex rounded-md bg-primary-100 px-4 py-3 dark:bg-primary-900">
  <span class="pe-3 text-primary-400">
    <span class="icon relative inline-block px-1 align-text-bottom"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512"><path fill="currentColor" d="M256 0C114.6 0 0 114.6 0 256s114.6 256 256 256s256-114.6 256-256S397.4 0 256 0zM256 128c17.67 0 32 14.33 32 32c0 17.67-14.33 32-32 32S224 177.7 224 160C224 142.3 238.3 128 256 128zM296 384h-80C202.8 384 192 373.3 192 360s10.75-24 24-24h16v-64H224c-13.25 0-24-10.75-24-24S210.8 224 224 224h32c13.25 0 24 10.75 24 24v88h16c13.25 0 24 10.75 24 24S309.3 384 296 384z"/></svg>
</span>
  </span>
  <span class="dark:text-neutral-300">April 2026: A new official tool has been released which gives an alternative method to the one described below. You can now use the <a href="https://github.com/TryGhost/ghst" target="_blank" rel="noreferrer">ghst tool</a> via <code>ghst theme dev ./theme-dir --watch --activate</code>.</span>
</div>

<p>When developing Ghost themes inside Docker, getting a fast feedback loop of edit/reload/view can be tricky. Here’s how I set up live reloading and instant theme updates using <strong>Docker</strong> and <strong>Gulp</strong>, and a bonus use of <strong>Maildev</strong> to make SMTP configuration super simple.</p>
<p>This setup was a lifesaver when I started using AI assistants for tasks outside my usual wheelhouse, like wrestling with complex CSS media queries. By adding a specific <em>Theme Development Workflow</em> section to my <code>agent.md</code>, I enabled Antigravity to debug layouts in a Chrome instance; it understood that it could see its changes reflected as soon as it updated a file.</p>
<div class="max-w-2xl mx-auto p-4 border-l-4 rounded">
<strong class="text-red-500">Theme Development Workflow:</strong><br/>
When you modify theme files (Handlebars, CSS, or JS), the system is configured to reflect those changes immediately.
You do not need to manually restart the Docker container or refresh the browser; the gulp watch task handles the injection and reload automatically.
</div>
<h2 id="project-structure-overview" class="relative group">Project Structure Overview <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#project-structure-overview" aria-label="Anchor">#</a></span></h2><p>Here&rsquo;s an overview of the project root&rsquo;s structure, in a simplified form. The basic gist is that I have a root directory containing <strong>docker</strong> things, and a root directory for the <strong>theme</strong>.</p>
<div style="background: #000; color: #eee; font-size: 0.7rem; font-family: 'Fira Mono', 'Menlo', 'Consolas', monospace; padding: 1em; border-radius: 6px; white-space: pre;">
├── docker
│&nbsp;&nbsp; ├── docker-compose.yml
│&nbsp;&nbsp; ├── docker-ghost-mysqldb
│&nbsp;&nbsp; ├── docker-ghost-content
│&nbsp;&nbsp; │&nbsp;&nbsp; └── themes
│&nbsp;&nbsp; │&nbsp;&nbsp; <span>&nbsp; &nbsp; </span>└── mytheme-ghost-theme
│&nbsp;&nbsp; └── docker-mysql-data
└── mytheme-ghost-theme
&nbsp;&nbsp;&nbsp; ├── package.json
&nbsp;&nbsp;&nbsp; ├── gulpfile.mjs
&nbsp;&nbsp;&nbsp; ├── home.hbs
...
</div>
<p>This is what the <code>docker-compose.yml</code> file looks like. I&rsquo;m including the full file here because I found various examples of this online, but none of them were exactly right for my needs.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">services</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">ghostdev-web</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">image</span><span class="p">:</span><span class="w"> </span><span class="l">ghost:latest</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">restart</span><span class="p">:</span><span class="w"> </span><span class="l">unless-stopped</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">ports</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="m">2368</span><span class="p">:</span><span class="m">2368</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">environment</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="l">database__client=mysql</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="l">database__connection__host=ghostdev-db</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="l">database__connection__database=ghostdb</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="l">database__connection__user=ghost</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="l">database__connection__password=SHUUUSHSECRET</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="l">mail__transport=SMTP</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="l">mail__options__host=maildev-test</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="l">mail__options__port=1025</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="l">mail__options_secure=false</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="l">url=http://mydevbox.local:2368</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="l">NODE_ENV=development</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">volumes</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="l">/Users/nick/ghost-test/docker/docker-ghost-content:/var/lib/ghost/content</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">depends_on</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">ghostdev-db</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">condition</span><span class="p">:</span><span class="w"> </span><span class="l">service_healthy</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">ghostdev-db</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">image</span><span class="p">:</span><span class="w"> </span><span class="l">mysql:8.0</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">restart</span><span class="p">:</span><span class="w"> </span><span class="l">unless-stopped</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">container_name</span><span class="p">:</span><span class="w"> </span><span class="l">ghostdev-db</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">environment</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">MYSQL_ROOT_HOST</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;%&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">MYSQL_ROOT_PASSWORD</span><span class="p">:</span><span class="w"> </span><span class="l">ULTRAMEGASECRET</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">MYSQL_DATABASE</span><span class="p">:</span><span class="w"> </span><span class="l">ghostdb</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">MYSQL_USER</span><span class="p">:</span><span class="w"> </span><span class="l">ghost</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">MYSQL_PASSWORD</span><span class="p">:</span><span class="w"> </span><span class="l">SHUUUSHSECRET</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">volumes</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="l">/Users/nick/ghost-test/docker/docker-ghost-mysqldb:/var/lib/mysql</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">healthcheck</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">test</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">&#34;CMD-SHELL&#34;</span><span class="p">,</span><span class="w"> </span><span class="s2">&#34;mysqladmin ping -h 127.0.0.1 -p$$MYSQL_ROOT_PASSWORD || exit 1&#34;</span><span class="p">]</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">interval</span><span class="p">:</span><span class="w"> </span><span class="l">10s</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">timeout</span><span class="p">:</span><span class="w"> </span><span class="l">5s</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">retries</span><span class="p">:</span><span class="w"> </span><span class="m">10</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">start_period</span><span class="p">:</span><span class="w"> </span><span class="l">60s</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">maildev-test</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">image</span><span class="p">:</span><span class="w"> </span><span class="l">maildev/maildev:latest</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">restart</span><span class="p">:</span><span class="w"> </span><span class="l">unless-stopped</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">ports</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="m">1080</span><span class="p">:</span><span class="m">1080</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="m">1025</span><span class="p">:</span><span class="m">1025</span><span class="w">
</span></span></span></code></pre></div><p>The key thing to note here is that the Ghost container has <code>/var/lib/ghost/content</code> mounted from the host filesystem at <code>${PROJECT_ROOT}/docker/docker-ghost-content/</code>. This is important as it gives us easy, direct access to the active theme. If we overwrite the files in the active theme directory, it has an immediate effect on the files served by Ghost inside the Docker container. We don&rsquo;t need to go through the process of uploading another zipfile of the theme via the Ghost admin interface.</p>
<p>The use of <strong>Maildev</strong> is a little bonus. It&rsquo;s not directly related to the livereload technique, but there&rsquo;s a section right at the end talking about why it&rsquo;s useful.</p>
<h2 id="gulp" class="relative group">Gulp <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#gulp" aria-label="Anchor">#</a></span></h2><p>I based my theme on <a href="https://github.com/tryghost/casper" target="_blank" rel="noreferrer">Casper</a>, which is one of the core example themes available on GitHub. It already comes with a Gulp configuration, so here is an outline of how my livereload changes work.</p>
<ol>
<li>I placed the following somewhere towards the bottom of <code>default.hbs</code> in my Ghost theme.</li>
</ol>
<pre tabindex="0"><code>{{LIVERELOAD_SCRIPT}}
</code></pre><p>This is just an injection hook which gulp will search and replace, allowing us to alter the content as it copies the file from its source directory into the Docker container.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-js" data-lang="js"><span class="line"><span class="cl"><span class="kr">const</span> <span class="nx">LIVE_RELOAD_URL</span> <span class="o">=</span> <span class="s1">&#39;http://mydevbox.local:35729/livereload.js&#39;</span><span class="p">;</span>
</span></span><span class="line"><span class="cl"><span class="kr">const</span> <span class="nx">GHOST_THEME_PATH</span> <span class="o">=</span> <span class="nx">path</span><span class="p">.</span><span class="nx">join</span><span class="p">(</span><span class="nx">process</span><span class="p">.</span><span class="nx">cwd</span><span class="p">(),</span> <span class="s1">&#39;../docker/docker-ghost-content/themes/mytheme-ghost-theme&#39;</span><span class="p">);</span>
</span></span></code></pre></div><p>We need to introduce two const declarations. These could be moved to environment variables to make the setup more portable.</p>
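<p>As a small, hypothetical sketch (these exact variable names are mine, not part of the Casper gulpfile), the same two values could be read from environment variables, falling back to the hard-coded defaults:</p>
<pre tabindex="0"><code class="language-js" data-lang="js">// Hypothetical: allow overrides via environment variables, falling back to the defaults above.
import path from 'path'; // already imported at the top of the gulpfile

const LIVE_RELOAD_URL = process.env.LIVE_RELOAD_URL
  || 'http://mydevbox.local:35729/livereload.js';
const GHOST_THEME_PATH = process.env.GHOST_THEME_PATH
  || path.join(process.cwd(), '../docker/docker-ghost-content/themes/mytheme-ghost-theme');
</code></pre>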
<ol start="2">
<li>The livereload server also needs to be running while gulp is monitoring for changes.</li>
</ol>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-js" data-lang="js"><span class="line"><span class="cl">  <span class="kd">function</span> <span class="nx">serve</span><span class="p">(</span><span class="nx">done</span><span class="p">)</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">    <span class="nx">livereload</span><span class="p">.</span><span class="nx">listen</span><span class="p">();</span>
</span></span><span class="line"><span class="cl">    <span class="nx">done</span><span class="p">();</span>
</span></span><span class="line"><span class="cl">  <span class="p">}</span>
</span></span></code></pre></div><ol start="3">
<li>I then added a <code>replace()</code> command inside the handlebars section of the gulpfile.</li>
</ol>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-js" data-lang="js"><span class="line"><span class="cl">  <span class="nx">replace</span><span class="p">(</span><span class="s1">&#39;{{LIVERELOAD_SCRIPT}}&#39;</span><span class="p">,</span> <span class="s1">&#39;&lt;script async src=&#34;&#39;</span> <span class="o">+</span> <span class="nx">LIVE_RELOAD_URL</span> <span class="o">+</span> <span class="s1">&#39;&#34;&gt;&lt;/script&gt;&#39;</span><span class="p">),</span>
</span></span></code></pre></div><ol start="4">
<li>And when I build the distribution zipfile for the theme, it removes the injection hook.</li>
</ol>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-js" data-lang="js"><span class="line"><span class="cl">  <span class="nx">replace</span><span class="p">(</span><span class="s1">&#39;{{LIVERELOAD_SCRIPT}}&#39;</span><span class="p">,</span> <span class="s1">&#39;&#39;</span><span class="p">),</span>
</span></span></code></pre></div><ol start="5">
<li>The actual process of copying to Docker is given its own method, encapsulating the above <em>replace()</em> functionality:</li>
</ol>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-js" data-lang="js"><span class="line"><span class="cl"><span class="kd">function</span> <span class="nx">copyToDockerGhost</span><span class="p">(</span><span class="nx">done</span><span class="p">)</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">  <span class="c1">// 1) Copy all non-HBS assets as-is (no string replacement on binary/text assets).
</span></span></span><span class="line"><span class="cl">  <span class="nx">pump</span><span class="p">([</span>
</span></span><span class="line"><span class="cl">    <span class="nx">src</span><span class="p">([</span>
</span></span><span class="line"><span class="cl">      <span class="s1">&#39;**&#39;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">      <span class="s1">&#39;!node_modules&#39;</span><span class="p">,</span> <span class="s1">&#39;!node_modules/**&#39;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">      <span class="s1">&#39;!dist&#39;</span><span class="p">,</span> <span class="s1">&#39;!dist/**&#39;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">      <span class="s1">&#39;!yarn-error.log&#39;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">      <span class="s1">&#39;!yarn.lock&#39;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">      <span class="s1">&#39;!gulpfile.js&#39;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">      <span class="s1">&#39;!gulpfile.mjs&#39;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">      <span class="s1">&#39;!.git&#39;</span><span class="p">,</span> <span class="s1">&#39;!.git/**&#39;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">      <span class="s1">&#39;!*.hbs&#39;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">      <span class="s1">&#39;!partials/**/*.hbs&#39;</span>
</span></span><span class="line"><span class="cl">    <span class="p">],</span> <span class="p">{</span> <span class="nx">encoding</span><span class="o">:</span> <span class="kc">false</span> <span class="p">}),</span>
</span></span><span class="line"><span class="cl">    <span class="nx">dest</span><span class="p">(</span><span class="nx">GHOST_THEME_PATH</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">  <span class="p">],</span> <span class="kd">function</span> <span class="p">(</span><span class="nx">err</span><span class="p">)</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">    <span class="k">if</span> <span class="p">(</span><span class="nx">err</span><span class="p">)</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">      <span class="k">return</span> <span class="nx">handleError</span><span class="p">(</span><span class="nx">done</span><span class="p">)(</span><span class="nx">err</span><span class="p">);</span>
</span></span><span class="line"><span class="cl">    <span class="p">}</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="c1">// 2) Copy HBS templates with live-reload script replacement.
</span></span></span><span class="line"><span class="cl">    <span class="nx">pump</span><span class="p">([</span>
</span></span><span class="line"><span class="cl">      <span class="nx">src</span><span class="p">([</span><span class="s1">&#39;*.hbs&#39;</span><span class="p">,</span> <span class="s1">&#39;partials/**/*.hbs&#39;</span><span class="p">],</span> <span class="p">{</span> <span class="nx">base</span><span class="o">:</span> <span class="s1">&#39;.&#39;</span> <span class="p">}),</span>
</span></span><span class="line"><span class="cl">      <span class="nx">replace</span><span class="p">(</span><span class="s1">&#39;{{LIVERELOAD_SCRIPT}}&#39;</span><span class="p">,</span> <span class="s1">&#39;&lt;script async src=&#34;&#39;</span> <span class="o">+</span> <span class="nx">LIVE_RELOAD_URL</span> <span class="o">+</span> <span class="s1">&#39;&#34;&gt;&lt;/script&gt;&#39;</span><span class="p">),</span>
</span></span><span class="line"><span class="cl">      <span class="nx">dest</span><span class="p">(</span><span class="nx">GHOST_THEME_PATH</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">    <span class="p">],</span> <span class="nx">handleError</span><span class="p">(</span><span class="nx">done</span><span class="p">));</span>
</span></span><span class="line"><span class="cl">  <span class="p">});</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span>
</span></span></code></pre></div><p>Now when I edit any of the Handlebars template files, I see the following output in the terminal session running <code>gulp</code> in watch mode.</p>
<div style="background: #000; color: #eee; font-size: 0.7rem; font-family: 'Fira Mono', 'Menlo', 'Consolas', monospace; padding: 1em; border-radius: 6px; white-space: pre;">
  <span style="color: #0e71cd;">$</span><span>&nbsp;npm run dev&nbsp;</span>
  <span>&gt; mytheme-ghost-theme&#x40;0.0.1 dev</span>
  <span>&gt; gulp</span>
<p>[<span style="color: #ac21ae;">12:42:50</span><span>] Using gulpfile </span><span style="color: #ac21ae;">&hellip;/Ghost/testing-ghost/mytheme-ghost-theme/gulpfile.mjs</span><span>  </span>
[<span style="color: #ac21ae;">12:42:50</span><span>] Starting '</span><span style="color: #1998c2;">default</span><span>'&hellip; </span>
[<span style="color: #ac21ae;">12:42:50</span><span>] Starting '</span><span style="color: #1998c2;">css</span><span>'&hellip; </span>
[<span style="color: #ac21ae;">12:42:50</span><span>] Finished '</span><span style="color: #1998c2;">css</span><span>' after </span><span style="color: #ac21ae;">192 ms</span><span>  </span>
[<span style="color: #ac21ae;">12:42:50</span><span>] Starting '</span><span style="color: #1998c2;">js</span><span>'&hellip; </span>
[<span style="color: #ac21ae;">12:42:51</span><span>] Finished '</span><span style="color: #1998c2;">js</span><span>' after </span><span style="color: #ac21ae;">360 ms</span><span> </span>
[<span style="color: #ac21ae;">12:42:51</span><span>] Starting '</span><span style="color: #1998c2;">copyToDockerGhost</span><span>'&hellip; </span>
[<span style="color: #ac21ae;">12:42:51</span><span>] Finished '</span><span style="color: #1998c2;">copyToDockerGhost</span><span>' after </span><span style="color: #ac21ae;">153 ms</span><span>  </span>
[<span style="color: #ac21ae;">12:42:51</span><span>] Starting '</span><span style="color: #1998c2;">serve</span><span>'&hellip; </span>
[<span style="color: #ac21ae;">12:42:51</span><span>] Finished '</span><span style="color: #1998c2;">serve</span><span>' after </span><span style="color: #ac21ae;">3.21 ms</span><span> </span>
[<span style="color: #ac21ae;">12:42:51</span><span>] Starting '</span><span style="color: #1998c2;">cssWatcher</span><span>'&hellip; </span>
[<span style="color: #ac21ae;">12:42:51</span><span>] Starting '</span><span style="color: #1998c2;">jsWatcher</span><span>'&hellip; </span>
[<span style="color: #ac21ae;">12:42:51</span><span>] Starting '</span><span style="color: #1998c2;">hbsWatcher</span><span>'&hellip; </span>
<span>&hellip;</span>
[<span style="color: #ac21ae;">12:45:32</span><span>] Starting '</span><span style="color: #1998c2;">hbs</span><span>'&hellip; </span>
[<span style="color: #535353;">12:45:32</span><span>] </span><span>&hellip;/Ghost/testing-ghost/docker/docker-ghost-content/themes/mytheme-ghost-theme/archive.hbs</span><span> reloaded. </span>
[<span style="color: #535353;">12:45:32</span><span>] </span><span>&hellip;/Ghost/testing-ghost/docker/docker-ghost-content/themes/mytheme-ghost-theme/default.hbs</span><span> reloaded. </span>
[<span style="color: #535353;">12:45:32</span><span>] </span><span>&hellip;/Ghost/testing-ghost/docker/docker-ghost-content/themes/mytheme-ghost-theme/error-404.hbs</span><span> reloaded. </span>
[<span style="color: #535353;">12:45:32</span><span>] </span><span>&hellip;/Ghost/testing-ghost/docker/docker-ghost-content/themes/mytheme-ghost-theme/error.hbs</span><span> reloaded. </span>
[<span style="color: #535353;">12:45:32</span><span>] </span><span>&hellip;/Ghost/testing-ghost/docker/docker-ghost-content/themes/mytheme-ghost-theme/home.hbs</span><span> reloaded. </span>
[<span style="color: #535353;">12:45:32</span><span>] </span><span>&hellip;/Ghost/testing-ghost/docker/docker-ghost-content/themes/mytheme-ghost-theme/index.hbs</span><span> reloaded. </span>
[<span style="color: #535353;">12:45:32</span><span>] </span><span>&hellip;/Ghost/testing-ghost/docker/docker-ghost-content/themes/mytheme-ghost-theme/page.hbs</span><span> reloaded. </span>
[<span style="color: #535353;">12:45:32</span><span>] </span><span>&hellip;/Ghost/testing-ghost/docker/docker-ghost-content/themes/mytheme-ghost-theme/post.hbs</span><span> reloaded. </span>
[<span style="color: #535353;">12:45:32</span><span>] </span><span>&hellip;/Ghost/testing-ghost/docker/docker-ghost-content/themes/mytheme-ghost-theme/series.hbs</span><span> reloaded. </span>
[<span style="color: #535353;">12:45:32</span><span>] </span><span style="color: #ac21ae;">&hellip;/Ghost/testing-ghost/docker/docker-ghost-content/themes/mytheme-ghost-theme/tag.hbs</span><span> reloaded. </span>
[<span style="color: #535353;">12:45:32</span><span>] </span><span>&hellip;/Ghost/testing-ghost/docker/docker-ghost-content/themes/mytheme-ghost-theme/partials/post-card.hbs</span><span> reloaded. </span>
[<span style="color: #ac21ae;">12:45:32</span><span>] Finished '</span><span style="color: #1998c2;">hbs</span><span>' after </span><span style="color: #ac21ae;">115 ms</span><span>  </span>
[<span style="color: #ac21ae;">12:45:32</span><span>] Starting '</span><span style="color: #1998c2;">copyToDockerGhost</span><span>'&hellip; </span>
[<span style="color: #ac21ae;">12:45:32</span><span>] Finished '</span><span style="color: #1998c2;">copyToDockerGhost</span><span>' after </span><span style="color: #ac21ae;">140 ms</span><span> </span></p>
</div>
<p>When you modify any Handlebars, CSS, or JS files, you&rsquo;ll see a flurry of log messages, and then your browser will reload any page being served by your Ghost container. Magic! This has been extraordinarily helpful in lowering iteration time for work on the theme.</p>
<h2 id="full-gulpfile-example" class="relative group">Full Gulpfile example <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#full-gulpfile-example" aria-label="Anchor">#</a></span></h2><p>You can find the full gulp configuration file here: <a href="https://gist.github.com/nickludlam/ccd2f5449fe2f7315302170cfdcb29ec" target="_blank" rel="noreferrer">gulpfile.mjs</a>, where there&rsquo;s slightly more configuration to ensure things like <code>css</code> editing also cause a livereload event.</p>
<p>Just running <code>gulp</code> leaves you in development mode, where it watches for file changes, and livereload is active. Running <code>gulp zip</code> will build the theme zipfile into the <code>dist/</code> directory.</p>
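<p>For reference, the wiring behind that development mode looks roughly like the sketch below. This is a simplification based on the log output above rather than a copy of the gist, and it assumes the <code>css</code>, <code>js</code>, <code>hbs</code>, <code>copyToDockerGhost</code>, <code>serve</code>, <code>cssWatcher</code> and <code>jsWatcher</code> functions defined elsewhere in the gulpfile:</p>
<pre tabindex="0"><code class="language-js" data-lang="js">// Rough sketch of the dev wiring; see the linked gulpfile.mjs gist for the real thing.
import gulp from 'gulp';
const { watch, series, parallel } = gulp;

function hbsWatcher() {
  // Any template change re-runs the hbs task and then the copy into the Docker volume,
  // which is what produces the livereload events shown in the log above.
  return watch(['*.hbs', 'partials/**/*.hbs'], series(hbs, copyToDockerGhost));
}

const dev = series(css, js, copyToDockerGhost, serve,
  parallel(cssWatcher, jsWatcher, hbsWatcher));
export default dev;
</code></pre>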
<h2 id="limitations" class="relative group">Limitations <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#limitations" aria-label="Anchor">#</a></span></h2><p>There are limitations. If you introduce a new file, you must bounce the server in order to get it to be seen correctly. Theme files seem to be read once at launch and then not again, unless you&rsquo;re uploading a new version via the admin interface. You can simply run:</p>
<p><code>docker compose restart ghostdev-web</code></p>
<p>Permissions can also be a problem. Generally speaking, the Ghost content directory will be owned by the root user, so you may need to adjust file permissions to be able to overwrite the data in <code>docker/docker-ghost-content/themes/</code>.</p>
<h2 id="maildev" class="relative group">Maildev <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#maildev" aria-label="Anchor">#</a></span></h2><p>The last thing to mention is using <a href="https://github.com/maildev/maildev" target="_blank" rel="noreferrer">Maildev</a>. It&rsquo;s an SMTP server which also presents a <strong>webmail interface</strong> for anything sent through it. This means that you can easily view emails that Ghost sends, like a sign in link. All you need to do is go to port 1080 on your local machine, and you&rsquo;ll see this interface.</p>
<p>This saves you the bother of needing a fully working SMTP account for your development environment, which would send live emails over the internet. Now emails never need to leave the container network of your project.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/maildev_example.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Maildev in action</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
]]></content:encoded>
      </item>
    
      <item>
        <title>Simulating falling autumn leaves in Blender</title>
        <link>https://nick.recoil.org/articles/blender-falling-leaves-simulation/</link>
        <guid>https://nick.recoil.org/articles/blender-falling-leaves-simulation/</guid>
        <pubDate>Mon, 26 Jan 2026 16:36:01 UTC</pubDate>
        <description>&lt;![CDATA[In this article, I demonstrate how to create a realistic falling leaves simulation in Blender using geometry nodes. I talk about my process for randomisation, fine-tuning gravity, wind, and turbulence to create a perfect looping animation.]]></description>
        <content:encoded>&lt;![CDATA[

<p>In this article I&rsquo;ll be explaining how to create a small looping animation of falling leaves in Blender using geometry nodes instead of the traditional particle system approach. We&rsquo;ll break down the simulation into sections, and explain each part in detail. This is written as a tutorial, so you&rsquo;ll get the most out of it if you know the basics of Blender already.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="flex items-center justify-center">
  <figure>
    <video style="margin: 0"  muted autoplay loop playsinline>  
      <source src="videos/leaves_cropped.mp4" type="video/mp4" />
      <source src="videos/leaves_cropped.webm" type="video/webm" />
    </video>
    
    <figcaption class="text-center">The finished animation</figcaption>
    
</figure>
</div>

    </div>
  </div>
</div>
<p>This guide is designed to help bridge the gap between simple geometry instancing and more complex physics simulations. We are going to ditch the traditional Particle System functionality and build our own falling leaf engine completely inside Geometry Nodes.</p>
<p>While that might sound intimidating, the setup here is focused on aesthetics rather than accuracy. We’ll build a &ldquo;good enough&rdquo; physics model that looks great and loops perfectly. If you have a basic grasp of Blender nodes and want to see an example of the Simulation Zone in action, this project is the perfect playground.</p>
<p>The packed Blender file, which includes all textures, is available to download and use.</p>
<div class="download-file not-prose">
  <a class="download-file-link" href="blender/falling_leaf_simulation.blend">
    <img class="download-file-icon" src="images/Blender_logo_no_text.svg" alt="Blender logo" />
    <span class="download-file-filename">falling_leaf_simulation.blend</span>
  </a>
</div>
<hr>
<h1 id="the-inspiration" class="relative group">The inspiration <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#the-inspiration" aria-label="Anchor">#</a></span></h1><p>In autumn last year I was chatting with my partner about how spectacular the colour of the trees and falling leaves were in our local park. It got me thinking about making a small animation in Blender using geometry nodes. Currently, state-of-the-art AI video generation in this space is weak. Making the independent, chaotic motion of leaves believable is difficult.</p>
<p>I also wanted to push my understanding of Blender&rsquo;s increasingly sophisticated geometry nodes functionality, and more broadly improve my ability to develop within node-based environments. In the past I&rsquo;ve been hostile to node-based programming, thinking it was a poor substitute for writing code directly in an IDE. However as time has gone on, and I&rsquo;ve experienced working within teams where changes with time and people cause programs to atrophy, I&rsquo;ve come to appreciate the benefits. Node-based programming can provide a great API to maintain backward compatibility while allowing feature development, bug fixes and general improvements.</p>
<p>Implementing a falling leaves simulation is something you&rsquo;ve been able to do in Blender for a long time. You&rsquo;d stick down a particle system, get it to instantiate the leaves with some randomisation, and then apply some wind or turbulence in order to get them to fall in a suitably realistic way. This has worked well since circa 2008, but we&rsquo;re now at a point where we can recreate all the functionality we need ourselves using geometry nodes, and extend it in ways which would have been impossible with the particle system.</p>
<p>Specifically for our purposes, geometry nodes received an update in <a href="https://developer.blender.org/docs/release_notes/3.6/nodes_physics/" target="_blank" rel="noreferrer">Blender 3.6</a> which allows you to run simulation loops during your animation. This is the key feature which unlocks our ability to use geometry nodes for this project.</p>
<p>It is however a double-edged sword. You have all the freedom you could possibly want, but you need to start from first principles with a blank canvas, and build everything yourself. This can be intimidating, and the examples you can find online don&rsquo;t often deal with simulations; they tend to be more concerned with procedural geometry. This ended up being the challenge I set myself. Could I learn enough to implement a complete animation?</p>
<hr>
<h1 id="source-textures" class="relative group">Source textures <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#source-textures" aria-label="Anchor">#</a></span></h1><p>I looked for photographs of autumn leaf scenes, and found <a href="https://www.freepik.com/premium-photo/fall-leaves-park_32213313.htm" target="_blank" rel="noreferrer">this one on Freepik</a>. It suits our needs by being geometrically simple, has nice even lighting, and a very chaotic bed of leaves on the floor. The dynamic falling leaves could easily blend in once they land.</p>
<p>For the leaves, I collected a handful from my local park and photographed them outside on a plain white piece of paper. It looks like the leaves in the background photograph are larger Maple leaves, but sadly I don&rsquo;t live near any Maple trees. You work with what you have!</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/photographed_leaves.jpg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">A selection of the yellow leaves I collected</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>Bringing these textures into Blender and mapping them onto some planes, I found they looked good but presented a problem: they were too flat.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/leaves_in_blender.jpg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">The leaf textures brought into Blender</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<hr>
<h1 id="crumple-the-leaves" class="relative group">Crumple the leaves <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#crumple-the-leaves" aria-label="Anchor">#</a></span></h1><p>If you animate perfectly flat planes, it doesn&rsquo;t look very realistic. As they rotate to be side on to the camera, they disappear. To prevent this, we need to add some geometric distortion to simulate the drying and crumpling of real autumn leaves.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/crumple_leaves_geo_nodes.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">The geometry nodes to distort the mesh</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>We can create a simple geometry node network which subdivides the plane, and then uses a <span class="highlighted-text-orange">Noise Texture</span> node to push the normals of the mesh around a bit. Set the scale, detail and roughness of the noise settings to get a nice random look to the leaves.</p>








<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-2 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/crumpled_leaf_geom.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">The distorted mesh</figcaption>
      
    </picture>
  </figure>
</div>

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/crumped_leaf_textured.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">The textured mesh</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/leaves_profile.jpg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">The leaves now have a better side profile</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<hr>
<h1 id="the-scene" class="relative group">The scene <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#the-scene" aria-label="Anchor">#</a></span></h1><p>The scene setup is simple. There&rsquo;s not enough information in the image to make solving the camera perspective easy, but given just how simple the scene is, we can eyeball it.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/3d_scene.jpg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">The 3D scene</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>There&rsquo;s a camera, a thin emitter plane at the top, and a collider plane at the bottom. The emitter plane is tilted slightly to keep it out of the camera frustum, and the collider plane is flat and approximates the ground position. There&rsquo;s no specific light source; we use an environment texture to give us approximate lighting conditions, which suits the overcast day in the photograph.</p>
<p>I&rsquo;ve also placed the collection of leaf objects out of view of the camera so that I can work on the geometry and materials easily.</p>
<p>The camera setup is the only difficult element. The source photograph luckily has EXIF tags, including the lens used. Unfortunately the lens is a variable zoom with a range of 24mm to 70mm, so we still need to eyeball the camera settings to get a similar perspective. Again, the simplicity of the scene makes this forgiving.</p>
<hr>
<h1 id="the-emitter-geometry-nodes" class="relative group">The emitter geometry nodes <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#the-emitter-geometry-nodes" aria-label="Anchor">#</a></span></h1><p>Now we come to the heart of our falling leaf simulation: the geometry nodes for the emitter. Let&rsquo;s break down this fairly intimidating node network into sections.</p>
  
<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <a href="#" class="js-image-modal" data-large="images/falling_leaves_gn_network_compressed.png">
        <img class=" rounded-md modalimage-thumb" src="images/falling_leaves_gn_network_thumb.jpg"
          alt="The entire geometry node simulation network" style="margin: 0" />
      </a>
      
      <figcaption class="text-center">The entire geometry node simulation network. You can click to view the full-size image.</figcaption>
      
    </picture>
  </figure>
</div>

<p>The key to this setup is the <strong>Simulation Zone</strong>. Unlike standard geometry nodes which evaluate from scratch every frame, a Simulation Zone allows us to carry over data from the previous frame. By combining this with Named Attributes to store values like velocity and rotation, we can update the state of our leaves iteratively to create a physics simulation.</p>
<p>A high-level overview of the steps taken per frame is:</p>
<ul>
<li>Randomly instantiate additional leaves</li>
<li>Update the velocity from the fixed acceleration</li>
<li>Mix in a simulated wind component to the velocity, driven by a noise texture</li>
<li>Update the instance position from the velocity</li>
<li>Update the instance rotation from the rotational velocity</li>
<li>Calculate the collision with a floor plane, and exponentially damp the acceleration and velocity components</li>
<li>Set a lifespan fade value to use in the shader</li>
<li>Remove leaf instances at the end of the lifespan</li>
</ul>
<h2 id="spawning-leaves" class="relative group">Spawning leaves <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#spawning-leaves" aria-label="Anchor">#</a></span></h2>





<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/rate_limit_leaf_spawn.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Rate-limit the leaf spawns</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>The first thing we need to do is control how many leaves we spawn. One of the first surprising things I encountered was that the Distribute Points on Faces node controls the spawn count via the <em>density</em> of the points, not an absolute count. This means that scaling the spawning-zone geometry will change the number of leaves spawned per second.</p>
<p>The simplest way I found to control this, while staying compatible with our requirement for a looping animation, is the following:</p>
<ol>
<li>Take the frame number and <span class="highlighted-text-blue">Modulo</span> it with your looping frame count</li>
<li>Use that integer as a seed for a <span class="highlighted-text-blue">Random Value</span></li>
<li>Set a threshold, and if the random number is <span class="highlighted-text-blue">Less Than</span> a chosen value, use the result output as the Selection input to <span class="highlighted-text-green">Distribute Points on Faces</span></li>
</ol>
<p>Setting it up this way gives us an upper limit on the emission rate. If we exposed Density directly, it would create opportunities for UI accidents: a slip of the mouse could change the value from 0.01 to 10, and you&rsquo;d suddenly be instantiating 10,000 leaves each frame, tanking the application&rsquo;s performance. Setting a reasonable ceiling value, and then giving a percentage control over that maximum, is much safer for artistic experimentation.</p>
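<p>To make the arithmetic concrete, here is the gating logic written out as ordinary code. This is purely illustrative of what the nodes compute; the names and numbers are placeholders, not Blender API calls:</p>
<pre tabindex="0"><code class="language-js" data-lang="js">// Illustrative only: the spawn-gating maths performed by the nodes each frame.
const LOOP_FRAMES = 250;        // length of the looping animation, in frames (placeholder)
const SPAWN_PROBABILITY = 0.3;  // the "percentage of maximum" control (placeholder)

// Stand-in for Blender's seeded Random Value node.
function seededRandom01(seed) {
  const x = Math.sin(seed * 127.1) * 43758.5453;
  return x - Math.floor(x);
}

function spawnSelectionForFrame(frame) {
  const seed = frame % LOOP_FRAMES;  // 1) frame number Modulo the loop length
  const r = seededRandom01(seed);    // 2) seeded Random Value in 0..1
  return r &lt; SPAWN_PROBABILITY;   // 3) Less Than -&gt; the Selection input
}
</code></pre>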
<h2 id="randomisation" class="relative group">Randomisation <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#randomisation" aria-label="Anchor">#</a></span></h2><p>In order to give us an output which is capable of looping, all random numbers are seeded with the frame number modulo the repeat count. This gives us repeatable behaviour over a set number of frames.</p>
<h2 id="attribute-setup" class="relative group">Attribute setup <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#attribute-setup" aria-label="Anchor">#</a></span></h2>





<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/initialise_attributes.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Initialise the Named Attributes we will be using</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>Now we need to set the initial values for the Named Attributes we&rsquo;ll be using during the simulation. The input values are primarily sourced from the <span class="highlighted-text-gray">Group Input</span> node so we can easily experiment with different values outside of the GN editor.</p>
<div class="not-prose">
<dl>
<dt>lifespan</dt>
<dd>The duration of our leaves in frames</dd>
<dt>accel</dt>
<dd>The static acceleration vector applied to each leaf. In our case it only experiences a negative Z acceleration due to gravity</dd>
<dt>vel</dt>
<dd>The initial velocity given to each leaf, zero in our case</dd>
<dt>maxAngSpeed</dt>
<dd>The upper limit on how fast the leaves can spin and tumble</dd>
<dt>normAngVel</dt>
<dd>The angular velocity of each leaf, expressed as a Vec3. This is given a random vec3 from -1 to 1, and is multiplied by <code>maxAngSpeed</code> to calculate its rotation speed</dd>
</dl>
</div>
<p>For completeness there is one more attribute we use, but it&rsquo;s calculated and set dynamically later in the network.</p>
<div class="not-prose">
<dl>
<dt>leafFadeOut</dt>
<dd>A 0-1 float which is an expression of inverse leaf alpha. This is used in the leaf shader.</dd>
</dl>
</div>
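<p>Viewed as plain data, each leaf instance carries state roughly like the record below. This is only a mental model; in Blender the values live as Named Attributes on the instance points, and the numbers shown are placeholders rather than the values used in the file:</p>
<pre tabindex="0"><code class="language-js" data-lang="js">// Sketch of the per-instance state carried through the Simulation Zone (placeholder values).
const leafState = {
  lifespan: 300,                 // duration of the leaf, in frames
  accel: [0, 0, -9.81],          // constant acceleration: gravity pulling down the Z axis
  vel: [0, 0, 0],                // current velocity, starts at zero
  maxAngSpeed: 2.0,              // upper limit on how fast a leaf can spin and tumble
  normAngVel: [0.4, -0.7, 0.1],  // random per-leaf vector in -1..1, later scaled by maxAngSpeed
  leafFadeOut: 0.0               // 0-1 inverse alpha, set later in the network for the shader
};
</code></pre>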
<h2 id="instantiation" class="relative group">Instantiation <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#instantiation" aria-label="Anchor">#</a></span></h2>





<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/instance_leaves.jpg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Now we create our leaf instances.</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>We instance our leaves as you might in a standard Geometry Node setup, but with a slight twist. Because we&rsquo;re using a Simulation Zone, we have an existing source of geometry instances inherited from the previous frame. We want to merge these existing leaves in with our <em>new</em> instances, and then feed them into our network so that old and new are treated equally.</p>
<h2 id="cheap-turbulence-simulation" class="relative group">Cheap turbulence simulation <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#cheap-turbulence-simulation" aria-label="Anchor">#</a></span></h2>





<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/position_based_vel_noise.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Cheap and easy velocity manipulation</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>To provide a simple way to simulate how leaves drift as they fall, we sample a <span class="highlighted-text-darkorange">3D fBM Noise Texture</span> and add the result to our leaf velocity, acting as a pseudo acceleration force. Before it is added to the leaf velocity, we scale it by an exposed value, allowing us to easily dial the strength up and down according to our needs.</p>
<p>Rather than this being directly added to the <code>vel</code> attribute each frame, we need to take into account when this turbulence needs to be applied, and when it doesn&rsquo;t.</p>
<h2 id="soft-ground-collision" class="relative group">Soft ground collision <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#soft-ground-collision" aria-label="Anchor">#</a></span></h2><p>Our leaves will eventually reach the ground, and must stop moving. Aesthetically it&rsquo;s much more believable if the leaves come to a rest with a slight deceleration, rather than coming to a halt instantaneously. This means we need to create a system of soft collision. The linear acceleration and velocity as well as the rotational velocity should be arrested by their proximity to the ground collider.</p>
<p>It&rsquo;s important to note that we&rsquo;re not using a heavy Rigid Body physics engine. Instead, we are creating a <strong>position-dependent damping field</strong>. Think of it as the air getting <em>thicker</em> the closer the leaf gets to the ground, eventually freezing it in place.</p>
<p>This field will give us a coefficient we can use to multiply the acceleration, velocity and rotation with, giving us an asymptotic decay function. We effectively lerp the system to a halt by using a fractional value.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/subgroup_leaf_in_freefall.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Our nodes to calculate the effect of the damping field</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>Above is the group of nodes we will use to calculate the effect of the damping field. Note that we use <strong>Relative</strong> as the transformation applied to the output. We&rsquo;re in the coordinate space of the emitter, so we need to bring the ground collider into that space to get the correct proximity values.</p>
<p>The <span class="highlighted-text-green">Geometry Proximity</span> node can be set to <strong>Faces</strong> because the collider is a single flat plane. Lastly we clamp the output of the proximity node to a maximum of 1, giving us our damping field coefficient, ready to multiply our acceleration, velocity and rotation by.</p>
<div class="flex rounded-md bg-primary-100 px-4 py-3 dark:bg-primary-900">
  <span class="pe-3 text-primary-400">
    <span class="icon relative inline-block px-1 align-text-bottom"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512"><path fill="currentColor" d="M256 0C114.6 0 0 114.6 0 256s114.6 256 256 256s256-114.6 256-256S397.4 0 256 0zM256 128c17.67 0 32 14.33 32 32c0 17.67-14.33 32-32 32S224 177.7 224 160C224 142.3 238.3 128 256 128zM296 384h-80C202.8 384 192 373.3 192 360s10.75-24 24-24h16v-64H224c-13.25 0-24-10.75-24-24S210.8 224 224 224h32c13.25 0 24 10.75 24 24v88h16c13.25 0 24 10.75 24 24S309.3 384 296 384z"/></svg>
</span>
  </span>
  <span class="dark:text-neutral-300">For some more flexibility we could divide the proximity by a scalar to give longer range to the collision. For our leaf collision purposes, a proximity of 1 is sufficient because the scale of the leaves is small.</span>
</div>







<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/using_soft_collision.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Employing our soft collider calculation</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>There are two ways we use the soft collider output. One is to directly use the <code>fInFreefall</code> value, which is 1 in freefall, and drops to a value below 1 during collision. We will demonstrate this later. The other is to convert our freefall value into an inverted <strong>collision</strong> boolean, where it has a value of <strong>0</strong> in freefall, and <strong>1</strong> during collision.</p>
<p>We use this collision factor, via a scale node, to immediately kill any contribution the position noise field makes to our leaf velocity. The output of the scale node in the image above is the velocity contribution of the noise field.</p>
<p>In physics terms, we are applying a drag coefficient that increases infinitely as distance approaches zero.</p>
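<p>Sketching those two uses in Python, with illustrative names only rather than the actual node outputs:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"># fInFreefall is 1 in free fall and drops below 1 near the ground
def collision_factor(f_in_freefall):
    # Inverted boolean: 0 in free fall, 1 during collision
    return 1.0 if f_in_freefall &lt; 1.0 else 0.0

def damped_noise_velocity(noise_velocity, f_in_freefall):
    # The scale node: kill the noise field's contribution as soon as we collide
    return noise_velocity * (1.0 - collision_factor(f_in_freefall))
</code></pre></div>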
<h2 id="updating-velocity-and-position" class="relative group">Updating velocity and position <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#updating-velocity-and-position" aria-label="Anchor">#</a></span></h2>





<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/calc_vel_pos.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">The power of simulation nodes, being able to update position and velocity</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>Now we come to update the leaf&rsquo;s velocity and position. First we grab the <code>accel</code> attribute and multiply it by Delta Time to get the velocity contribution. We then add this to the velocity contribution from the turbulence simulation, and add that change in velocity to the existing <code>vel</code> attribute. The 



<span class="highlighted-text-purple">Scale</span>
 node then scales the overall velocity by the <code>fInFreefall</code> value.</p>
<div class="flex rounded-md bg-primary-100 px-4 py-3 dark:bg-primary-900">
  <span class="pe-3 text-primary-400">
    <span class="icon relative inline-block px-1 align-text-bottom"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512"><path fill="currentColor" d="M256 0C114.6 0 0 114.6 0 256s114.6 256 256 256s256-114.6 256-256S397.4 0 256 0zM256 128c17.67 0 32 14.33 32 32c0 17.67-14.33 32-32 32S224 177.7 224 160C224 142.3 238.3 128 256 128zM296 384h-80C202.8 384 192 373.3 192 360s10.75-24 24-24h16v-64H224c-13.25 0-24-10.75-24-24S210.8 224 224 224h32c13.25 0 24 10.75 24 24v88h16c13.25 0 24 10.75 24 24S309.3 384 296 384z"/></svg>
</span>
  </span>
  <span class="dark:text-neutral-300">To describe what we&rsquo;re doing in mathematical notation, we can define \(d\) as the distance to the collider, then we can define \(\lambda = \text{clamp}(d,0,1)\). This is what we assign to <code>fInFreefall</code>. Then within each iteration we will calculate \(\vec{v}_\text{damped} = \vec{v}_\text{original} \cdot \lambda\). This damping will be applied to the linear and angular velocity, as well as the acceleration.</span>
</div>
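<p>Putting the whole per-frame update together as a rough scalar sketch in Python. The graph works on the instance attributes; the simple Euler position step at the end stands in for the node that moves the instances:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"># Per-frame Euler-style update of one leaf (scalar sketch, illustrative names)
def update_leaf(pos, vel, accel, turbulence_vel, f_in_freefall, dt):
    dv = accel * dt + turbulence_vel   # acceleration over the frame plus the noise field
    vel = (vel + dv) * f_in_freefall   # damp everything towards zero near the ground
    pos = pos + vel * dt               # integrate the position
    return pos, vel
</code></pre></div>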







<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/set_accel_vel.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Updating the acceleration and velocity attributes</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>In the above image, we can see the <code>vel</code>, <code>accel</code> and <code>normAngVel</code> attributes being updated. The velocity has already been scaled by the <code>fInFreefall</code> value, but the other two still need to be scaled before being updated.</p>
<h2 id="updating-the-rotation" class="relative group">Updating the rotation <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#updating-the-rotation" aria-label="Anchor">#</a></span></h2>





<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/leaf_rotation.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Updating the rotation of the leaf instances</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>In order to calculate the updated eulers, we need to scale the <code>normAngVel</code> by the <code>maxAngSpeed</code> and multiply it by Delta Time. This gives us the change in rotation. We then convert this to a rotation and use it in the 



<span class="highlighted-text-green">Rotate Instances</span>
 node.</p>
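<p>In scalar form, the rotation step amounts to the following (again just an illustrative Python sketch):</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"># Change in rotation for one frame: normalised angular velocity scaled to real units
def rotation_delta(norm_ang_vel, max_ang_speed, dt):
    return tuple(axis * max_ang_speed * dt for axis in norm_ang_vel)

# e.g. a leaf tumbling mostly around X, sampled at 24 fps
print(rotation_delta((1.0, 0.2, 0.0), max_ang_speed=2.0, dt=1 / 24))
</code></pre></div>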
<h2 id="fading-and-deleting-the-leaves" class="relative group">Fading and deleting the leaves <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#fading-and-deleting-the-leaves" aria-label="Anchor">#</a></span></h2><p>The last two things we need to do is fade out the leaves and delete them at the end of their lifespan. Note that while the attribute is named <code>lifespan</code>, it actually stores the <strong>end frame</strong> value—the specific frame number at which the leaf instance should be deleted.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/fade_out_calc.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Calculating the fade out value</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>The following pseudocode describes the fade-out calculation, where <code>k</code>-prefixed variables are our constants defined for the whole network, and <code>attr</code> is the prefix for named attributes.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-c" data-lang="c"><span class="line"><span class="cl"><span class="n">fadeStartFrame</span> <span class="o">=</span> <span class="n">attrLifespan</span> <span class="o">-</span> <span class="n">kFadeFrameCount</span>
</span></span><span class="line"><span class="cl"><span class="n">framesIntoFade</span> <span class="o">=</span> <span class="n">fadeStartFrame</span> <span class="o">-</span> <span class="n">currentFrameNumber</span>
</span></span><span class="line"><span class="cl"><span class="n">t</span> <span class="o">=</span> <span class="n">framesIntoFade</span> <span class="o">/</span> <span class="n">kFadeFrameCount</span>
</span></span><span class="line"><span class="cl"><span class="k">return</span> <span class="nf">MapRange</span><span class="p">(</span><span class="n">t</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
</span></span></code></pre></div><p>This is stored in the <code>leafFadeOut</code> attribute, starting at 0 and rising to 1 when the leaf has faded out completely. It&rsquo;s effectively an inverse alpha. The default value of 0 means a leaf is fully visible, which will be important once we come to implement the fade out in our leaf shader.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/delete_leaves.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Deleting the leaves at the end of the lifespan</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>The deletion comparison is easy: we just compare the current frame number to our <code>lifespan</code> attribute. If the current frame number is greater than the lifespan, we delete the leaf using the 



<span class="highlighted-text-purple">Delete Geometry</span>
 node targeting the instances.</p>
<hr>
<h1 id="the-leaf-shader" class="relative group">The leaf shader <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#the-leaf-shader" aria-label="Anchor">#</a></span></h1>





<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/leaf_shader_calculations.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Using the fade out value to fade out the leaves</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>The leaf shader is set up in a simple way. The lighting in our scene is very uniform, almost blown out, and very saturated. In order to get the rendered output to match the plate, it&rsquo;s necessary to make the leaves slightly emissive. We also have basic hue, saturation and value controls, as well as a brightness and contrast control, in case any additional tweaking is needed.</p>
<p>The only other point of note here is the use of the <code>leafFadeOut</code> attribute to control the opacity of the leaves. To be able to use this shader when no attribute is set, we rely on the default value of 0 meaning that the leaf is fully opaque. Because the attribute is the inverse of what we need for the Alpha input of the standard Principled BSDF shader, we invert it before use.</p>
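<p>In other words, the Alpha we feed the Principled BSDF is simply \(1 - \text{leafFadeOut}\): the default attribute value of 0 gives an Alpha of 1, so a leaf with no fade data renders fully opaque.</p>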






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/leaf_shader_with_texture.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">The final shader with unique texture per leaf</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>Here we can see that the same shader is used; we just vary the input texture for each leaf.</p>
<hr>
<h1 id="compositing" class="relative group">Compositing <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#compositing" aria-label="Anchor">#</a></span></h1>





<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/compositing.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">The final composite of the leaves and the plate</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>Lastly we have the compositing setup. We just use some global Hue, Saturation and Value nodes to adjust the colour of the leaves to match the plate, and an Exposure node to adjust the brightness. The leaves are then composited over the plate using the alpha channel.</p>
<h1 id="looping-video" class="relative group">Looping video <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#looping-video" aria-label="Anchor">#</a></span></h1><p>In order to create the looping video, we need to ensure that the output frame count is at least twice the loop count. We can then use ffmpeg to chop out the section we need.</p>
<p><code>ffmpeg -i /path/to/video.mp4 -vf &quot;trim=start_frame=400:end_frame=600,setpts=PTS-STARTPTS&quot; -an output.mp4</code></p>
<p>There may well be a way to achieve this step within Blender itself, but I already knew how to do it with ffmpeg.</p>
<hr>
<h1 id="conclusion-and-future-work" class="relative group">Conclusion and future work <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#conclusion-and-future-work" aria-label="Anchor">#</a></span></h1><p>I&rsquo;m happy with the final animation. The camera setup isn&rsquo;t perfect, but the aesthetics look good nonetheless. There are a number of areas I&rsquo;d like to explore further.</p>
<ul>
<li>Grouping more nodes together to make the top level graph more manageable, and easier to reuse in a modular fashion</li>
<li>Using the baking capability to make playback quicker when you don&rsquo;t need to change the parameters of the simulation</li>
<li>Avoiding the ffmpeg step, and producing a looping video directly from Blender</li>
</ul>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="flex items-center justify-center">
  <figure>
    <video style="margin: 0"  muted autoplay loop playsinline>  
      <source src="videos/leaves_cropped_split_view.mp4" type="video/mp4" />
      <source src="videos/leaves_cropped_split_view.webm" type="video/webm" />
    </video>
    
    <figcaption class="text-center">A split view to show the leaves more easily</figcaption>
    
</figure>
</div>

    </div>
  </div>
</div>
]]></content:encoded>
      </item>
    
      <item>
        <title>Making maps without getting lost</title>
        <link>https://nick.recoil.org/articles/making-maps-without-getting-lost/</link>
        <guid>https://nick.recoil.org/articles/making-maps-without-getting-lost/</guid>
        <pubDate>Sun, 09 Mar 2025 20:52:01 UTC</pubDate>
        <description>&lt;![CDATA[Creating an interactive tile-based map of a fictional island from a video game]]></description>
        <content:encoded>&lt;![CDATA[<p>This is a story about creating an interactive tile-based map of a fictional island from a video game. It goes through the inception of an idea, self-imposed constraints, and staying focused on delivery. There&rsquo;s enough discovery, intellectual rabbit holes and ruthless pragmatism to turn any casual hobby into an existential crisis.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="flex items-center justify-center">
  <figure>
    <video style="margin: 0"  muted autoplay loop playsinline>  
      <source src="videos/map_pan_demo.mp4" type="video/mp4" />
      <source src="videos/map_pan_demo.webm" type="video/webm" />
    </video>
    
    <figcaption class="text-center">Panning across our fictional video game island</figcaption>
    
</figure>
</div>

    </div>
  </div>
</div>
<h1 id="inception" class="relative group">Inception <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#inception" aria-label="Anchor">#</a></span></h1><p>I recently picked up <a href="https://reforger.armaplatform.com" target="_blank" rel="noreferrer">Arma Reforger</a> and I&rsquo;ve been enjoying the chaos of its 120 player online mode. The Arma series of games have always been positioned more as a <strong>Milsim</strong> (military simulation) rather than a classic first-person shooter like Battlefield or Call of Duty. One of the hardest aspects of the game&rsquo;s steep learning curve is <em>map knowledge</em>. New players swiftly discover that there&rsquo;s no ability to see where they are on the vast 13km by 13km map. You&rsquo;re instantly lost, and it&rsquo;s very disorientating.</p>
<p>Over time you build up a visual familiarity with your surroundings, and after a few weeks of playing you start to recognise the frequently used roads and landmark buildings. Alongside this map knowledge is familiarity with the location of <strong>supply caches</strong> which play an important role in the game mechanics. These supplies form the backbone of the in-game economy of both teams. If your team&rsquo;s bases don&rsquo;t have sufficient supplies, you can&rsquo;t purchase vehicles or specialised equipment, you can&rsquo;t extend the bases with new structures, and in dire situations, you won&rsquo;t even be able to respawn at them.</p>
<p>You sometimes stumble on these supply caches yourself, and sometimes other players show you while you&rsquo;re playing, but this knowledge is harder to acquire. The caches can range from large and obvious brown shipping containers to wooden crates stuffed in the attics of some very unassuming houses.</p>
<p>





<figure>
    
    








  
    <picture
      class="mx-auto my-0 rounded-md"
      
    >
      
      
      
      
        <source
          
            srcset="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/what_are_supplies_hu_2e0f8d3c381fe581.webp 330w,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/what_are_supplies_hu_1f91a8ab4dd2d9f2.webp 660w
            
              
                ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/what_are_supplies_hu_550748ab89ca2a58.webp 990w
              
            
            
              
                ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/what_are_supplies_hu_550748ab89ca2a58.webp 990w
              
            "
          
          sizes="100vw"
          type="image/webp"
        />
      
      <img
        width="990"
        height="256"
        class="mx-auto my-0 rounded-md"
        alt="Three different visual representations of supplies in Arma Reforger"
        loading="lazy" decoding="async"
        
          src="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/what_are_supplies_hu_acdf7d29c13d843f.jpg" srcset="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/what_are_supplies_hu_2f961147ac23808.jpg 330w,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/what_are_supplies_hu_acdf7d29c13d843f.jpg 660w
          
            ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/what_are_supplies.jpg 990w
          
          
            ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/what_are_supplies.jpg 990w
          "
          sizes="100vw"
        
      />
    </picture>
  

<figcaption class="text-center">The various visual representations of the supply caches inside the game</figcaption>
</figure>
</p>
<p>After some weeks of playing, it occurred to me that having a map on a second screen as you play would be <em>incredibly</em> useful, so I thought it&rsquo;d make a fun side project.</p>
<p>I didn&rsquo;t want to walk around and scout the cache locations out manually; they would need to be derived 100% procedurally. I&rsquo;d have to extract all the cache locations, not to mention the game map itself, from within the game&rsquo;s data.</p>
<p>To do so, I&rsquo;d have to work within the Reforger game engine, implement some command-line image processing, and ultimately bring everything together using a browser-based JavaScript map framework.</p>
<p>I wanted this to be a short project, with minimal distractions. If interesting coding or data problems came up during the work, it was important to be pragmatic and <strong>stay focussed on shipping version one</strong>. In this post we&rsquo;ll go through the problems I had to solve along the way, and the approach I took.</p>
<ol>
<li><a href="#deconstruction">Deconstruction</a></li>
<li><a href="#the-enfusion-workbench">The Enfusion Workbench</a></li>
<li><a href="#cropping-map-tiles">Cropping map tiles</a></li>
<li><a href="#implementing-leafletjs">Implementing LeafletJS</a></li>
<li><a href="#zoom-levels">Zoom levels</a></li>
<li><a href="#map-stats">Map stats</a></li>
<li><a href="#extracting-location-data">Extracting location data</a></li>
<li><a href="#the-finished-map">The finished map</a></li>
<li><a href="#some-topographic-fun">Some topographic fun</a></li>
<li><a href="#conclusion">Conclusion</a></li>
</ol>
<h1 id="deconstruction" class="relative group">Deconstruction <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#deconstruction" aria-label="Anchor">#</a></span></h1><p>The joy of a project like this is the <em>polydisciplinary</em> nature. No one single aspect of the work is likely to be be particularly difficult. However, each of the stages require working in a different domain, with different languages and different libraries, and all stages must be implemented for the project to deliver on its promise.</p>
<p>Another enticing aspect of a project like this is to create a genuinely useful public tool, and it&rsquo;s always fun to give back to a community you enjoy being a part of.</p>
<p>You begin thinking about a project like this by <em>working backwards</em>. The interface for the tool I want is going to be something like Google Maps. Web browsers can display these really well, and they work on any size of screen. If you want to make a map website you need map tiles to display, and coordinate data to dictate where the map pins are placed. To get hold of both of these elements I need to start with the game engine.</p>
<p>Arma Reforger&rsquo;s open developer tools are a common aspect of all <a href="https://www.bohemia.net" target="_blank" rel="noreferrer">Bohemia Interactive</a>&rsquo;s games. I&rsquo;d done some <a href="https://gist.github.com/nickludlam/9bb8bf4521eaeb16d81c" target="_blank" rel="noreferrer">light modding work for Arma 3</a> over 12 years ago, but the technology has been completely overhauled since then, so I&rsquo;ll effectively be starting from scratch.</p>
<p>Now let&rsquo;s put the steps back in the right order:</p>
<div class="mermaid" align="center">
  
graph LR;
A[Game engine]-- Screenshots -->B[Image processing]
A-- Entity query -->C[Location Data];
B-- Map tiles -->D[Web frontend];
C-- JSON -->D;

</div>

<p>First I&rsquo;ll need to dive into the most unknown part of the project, the game engine.</p>
<h1 id="the-enfusion-workbench" class="relative group">The Enfusion Workbench <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#the-enfusion-workbench" aria-label="Anchor">#</a></span></h1><p>The <a href="https://enfusionengine.com" target="_blank" rel="noreferrer">Enfusion Engine</a> is a cross-platform engine using <a href="https://www.qt.io/product/qt6" target="_blank" rel="noreferrer">Qt</a> for the interface, implementing a scripting language called <a href="https://community.bistudio.com/wiki/DayZ:Enforce_Script_Syntax" target="_blank" rel="noreferrer">Enforce Script</a>. As far as I know this is entirely their own creation, and not derived from any specific base language. Thankfully it&rsquo;s reasonably C-like, so it&rsquo;s not too difficult to get up to speed with.</p>
<div class="flex rounded-md bg-primary-100 px-4 py-3 dark:bg-primary-900">
  <span class="pe-3 text-primary-400">
    <span class="icon relative inline-block px-1 align-text-bottom"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512"><path fill="currentColor" d="M256 0C114.6 0 0 114.6 0 256s114.6 256 256 256s256-114.6 256-256S397.4 0 256 0zM256 128c17.67 0 32 14.33 32 32c0 17.67-14.33 32-32 32S224 177.7 224 160C224 142.3 238.3 128 256 128zM296 384h-80C202.8 384 192 373.3 192 360s10.75-24 24-24h16v-64H224c-13.25 0-24-10.75-24-24S210.8 224 224 224h32c13.25 0 24 10.75 24 24v88h16c13.25 0 24 10.75 24 24S309.3 384 296 384z"/></svg>
</span>
  </span>
  <span class="dark:text-neutral-300">Enforce Script and the Enfusion game engine are niche topics, so unfortunately it&rsquo;s not possible to get <a href="https://copilot.microsoft.com/chats/ez2es39ixcE72LukFACCe" target="_blank" rel="noreferrer">Copilot</a> or <a href="https://claude.ai" target="_blank" rel="noreferrer">Claude</a> to reliably help you develop this. We&rsquo;re going to be taking the traditional route of reading the documentation and example code. How old school!</span>
</div>

<p>The development tools are accessible via Steam. It&rsquo;s a single application with multiple windowed sub-applications that are tailored to specific tasks like modelling, scripting, audio, particle systems etc. I&rsquo;m focusing on the <em>World Editor</em> and the <em>Script Editor</em>, as these are the sub-applications which load and render the game worlds, and provide the main scripting IDE, respectively.</p>
<p>The first thing to do is load up one of the game worlds and look to see whether the supply caches are present in the map data, or whether they are instantiated at runtime. If we can see them in the offline map it&rsquo;s going to be a lot easier to work with.</p>
<p>





<figure>
    
    








  
    <picture
      class="mx-auto my-0 rounded-md"
      
    >
      
      
      
      
        <source
          
            srcset="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supplies_in_workbench_hu_6ce0da22c63045e7.webp 330w,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supplies_in_workbench_hu_e4ff32825ca437e.webp 660w
            
              ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supplies_in_workbench_hu_9368468ffe955a6b.webp 1024w
            
            
              
                ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supplies_in_workbench_hu_c1579b79325db776.webp 1280w
              
            "
          
          sizes="100vw"
          type="image/webp"
        />
      
      <img
        width="1280"
        height="720"
        class="mx-auto my-0 rounded-md"
        alt="A screenshot of the Enfusion editor"
        loading="lazy" decoding="async"
        
          src="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supplies_in_workbench_hu_e885f98a472ada48.jpg" srcset="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supplies_in_workbench_hu_e2d0e5a9429d349f.jpg 330w,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supplies_in_workbench_hu_e885f98a472ada48.jpg 660w
          
            ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supplies_in_workbench_hu_fc26d49ec79153ac.jpg 1024w
          
          
            ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supplies_in_workbench.jpg 1280w
          "
          sizes="100vw"
        
      />
    </picture>
  

<figcaption class="text-center">Supply locations as seen in the World Editor</figcaption>
</figure>
</p>
<p>Yep, there they are! Now let&rsquo;s turn our attention to the map. Maybe there&rsquo;s an existing way to export topographical data we can use for our map tiles? Looking around the editor there are various mentions of likely tools like <strong>Map Exporter</strong>, <strong>Export Map data</strong>, <strong>Export Geographic data</strong>. The only one that seems to produce an image is the <strong>Export Map data</strong> tool. It has three modes of operation, and the two of interest are <code>RASTERIZATION</code> and <code>GEOMETRY_2D</code>.</p>
<p>





<figure>
    
    








  
    <picture
      class="mx-auto my-0 rounded-md"
      
    >
      
      
      
      
        <source
          
            
              srcset="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/CTI_Campaign_Eden_plain_hu_c84e71f3e47b6710.webp"
            
          
          sizes="100vw"
          type="image/webp"
        />
      
      <img
        width="500"
        height="500"
        class="mx-auto my-0 rounded-md"
        alt="The blank base map of Everon exported from the Enfusion editor"
        loading="lazy" decoding="async"
        
          src="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/CTI_Campaign_Eden_plain.jpg"
        
      />
    </picture>
  

<figcaption class="text-center">The exported image from RASTERIZATION mode</figcaption>
</figure>
</p>
<p>Running it in <code>RASTERIZATION</code> mode exports a large 4096 x 4096 px texture which forms a base map used inside the game, and you can see that above. In the game it&rsquo;s also composited with vector information detailing forested areas, the road and path network, and some building data. This vector data is exported to a <code>.topo</code> file, but unfortunately there&rsquo;s no documentation on the format for this file.</p>
<p>Ok, so there are no existing maps which fit the requirements. Time for Plan B; let&rsquo;s write our own map tile exporter!</p>
<p>Looking at the documentation and sample applications it&rsquo;s possible to script the position and orientation of the editor camera fairly easily, as well as capturing output images to disk. Unfortunately there is no control over the perspective matrix, which means there&rsquo;s no way to make it orthographic, and we&rsquo;ll have issues relating to perspective.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-c++" data-lang="c++"><span class="line"><span class="cl"><span class="c1">// Get World Editor module
</span></span></span><span class="line"><span class="cl"><span class="n">WorldEditor</span> <span class="n">worldEditor</span> <span class="o">=</span> <span class="n">Workbench</span><span class="p">.</span><span class="n">GetModule</span><span class="p">(</span><span class="n">WorldEditor</span><span class="p">);</span>
</span></span><span class="line"><span class="cl"><span class="c1">// Get World Editor API
</span></span></span><span class="line"><span class="cl"><span class="n">WorldEditorAPI</span> <span class="n">api</span> <span class="o">=</span> <span class="n">worldEditor</span><span class="p">.</span><span class="n">GetApi</span><span class="p">();</span>
</span></span><span class="line"><span class="cl"><span class="c1">// Position the camera 1 km up, looking down
</span></span></span><span class="line"><span class="cl"><span class="n">vector</span> <span class="n">camPos</span> <span class="o">=</span> <span class="n">Vector</span><span class="p">(</span><span class="mi">3000</span><span class="p">,</span> <span class="mi">1000</span><span class="p">,</span> <span class="mi">3000</span><span class="p">);</span>
</span></span><span class="line"><span class="cl"><span class="n">vector</span> <span class="n">lookVec</span> <span class="o">=</span> <span class="n">Vector</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="o">-</span><span class="mi">90</span><span class="p">,</span> <span class="mi">0</span><span class="p">);</span>
</span></span><span class="line"><span class="cl"><span class="n">api</span><span class="p">.</span><span class="n">SetCamera</span><span class="p">(</span><span class="n">camPos</span><span class="p">,</span> <span class="n">lookVec</span><span class="p">);</span>
</span></span><span class="line"><span class="cl"><span class="c1">// Now create the screenshot
</span></span></span><span class="line"><span class="cl"><span class="n">System</span><span class="p">.</span><span class="n">MakeScreenshot</span><span class="p">(</span><span class="s">&#34;test&#34;</span><span class="p">);</span>
</span></span><span class="line"><span class="cl"><span class="c1">// we now have a &#39;test.png&#39; file
</span></span></span></code></pre></div><p>If we can&rsquo;t eliminate issues with perspective, then we need to minimise them, so let&rsquo;s emulate real-world satellite data. If we position the camera far up, and with a very narrow FOV, we can compress the resulting perspective distortion as much as possible. The balancing act here is having a narrow FOV without fighting against the engine&rsquo;s desire to enforce Level of Detail restrictions on how many and how well every object is rendered. Zoom in too much and you can ruin the visual fidelity.</p>
<p>A vertical FOV of 15 degrees offers a good mix of fidelity vs granularity. It gives us approximately 20 cm per pixel in the resulting image, which is enough to resolve terrain features you&rsquo;ll encounter while  walking or driving around the map.</p>
<p>Control over the camera is very restricted, and it&rsquo;s not possible to control the field of view or far plane distance via scripting. Part of our process will require the user to set up some camera parameters manually, but luckily it can persist across sessions, making it less of a source of potential error.</p>
<p>





<figure>
    
    








  
    <picture
      class="mx-auto my-0 rounded-md"
      
    >
      
      
      
      
        <source
          
            
              srcset="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/camera_settings_hu_9639bf248aa2a5fd.webp"
            
          
          sizes="100vw"
          type="image/webp"
        />
      
      <img
        width="396"
        height="424"
        class="mx-auto my-0 rounded-md"
        alt="A screenshot of the camera controls in the Enfusion editor"
        loading="lazy" decoding="async"
        
          src="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/camera_settings.png"
        
      />
    </picture>
  

<figcaption class="text-center">Manually setting the camera FOV and far plane is not too difficult</figcaption>
</figure>
</p>
<p>The process to capture the images is straightforward, but consumes a lot of disk space. This is partly due to the difficulty of controlling the size of the camera window within the editor, and therefore the size of the output screenshot. The only consistent and reliable way to control this is to put the camera into full-screen mode with the F11 key, which then makes the screenshots the same size as my monitor resolution, 2560 x 1440 px. It captures a lot more data than we require, but again it&rsquo;s about making this process repeatable in a regime where you can&rsquo;t have the script enforce the capture parameters.</p>
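<p>As a rough sanity check on the earlier figure of roughly 20 cm per pixel: if we take the ~1 km camera altitude from the snippet above as representative, the 15 degree vertical FOV covers about 2 x 1000 m x tan(7.5°) ≈ 263 m of ground across the screenshot&rsquo;s 1440 px height, which works out to a little over 18 cm per pixel.</p>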
<p>We&rsquo;ll be moving the camera across the island in two nested loops, one for the X axis, and one for Z. We want to traverse across the 13 km of each axis in 100-metre steps, which means 16,900 steps in total. That&rsquo;s a lot, so we need to make the capture process capable of detecting existing screenshots, and skipping over that position. This will save us a lot of needless repetition as we stop and start the process during development.</p>
<p>We can also make this process detect the cropped tile from the screenshot using a <code>_tile.png</code> suffix, meaning that we can delete the original screenshots once we&rsquo;ve cropped them. This will help keep the intermediate disk usage down. Every screenshot is around 6 MB, which means our total intermediate disk usage will be nearly 100 GB. Did I mention I was short on disk space during this project? I became a bit obsessed with storage efficiency!</p>
<p>The last trick of camera movement is to make the camera stay at a fixed height relative to the terrain surface. This ensures that the perspective artefacts which cause edge discontinuities between tiles are minimised when travelling over areas of rapid height change.</p>
<p>Capturing this data takes an hour or two, as pauses need to be added to the capture loop to allow the renderer to stabilise the visual image, and write the screenshot to disk. During testing, if I moved the camera too rapidly I could alter the buffer contents before it was successfully written to disk, or I could take the screenshot during the eye adaptation changes, creating inconsistencies.</p>
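<p>The control flow of the capture pass is simple; here it is sketched in Python, where <code>screenshot_exists</code> and <code>capture_tile_at</code> are hypothetical stand-ins for the Enforce Script editor calls:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"># Structure of the capture loop (Python sketch; the real version is Enforce Script)
STEP_M = 100          # one capture every 100 m
MAP_SIZE_M = 13_000   # Everon is roughly 13 km per side, so 130 x 130 = 16,900 captures

def capture_grid(screenshot_exists, capture_tile_at):
    for x in range(0, MAP_SIZE_M, STEP_M):
        for z in range(0, MAP_SIZE_M, STEP_M):
            name = f"screenshot_{x}_{z}"
            if screenshot_exists(name):
                continue                     # already captured on a previous run
            capture_tile_at(x, z, name)      # move camera, pause to settle, screenshot
</code></pre></div>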
<h1 id="cropping-map-tiles" class="relative group">Cropping map tiles <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#cropping-map-tiles" aria-label="Anchor">#</a></span></h1>
  
  
  
  
  

  
  
  <figure class="float-right w-[300px] mt-0 ml-20 md:block hidden">
    
      
      








  
    <picture
      class="float-right w-[300px] mt-0 ml-20 md:block hidden"
      
    >
      
      
      
      
        <source
          
            
              srcset="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/screenshot_layers_sm_hu_c0569127e14af630.webp"
            
          
          sizes="100vw"
          type="image/webp"
        />
      
      <img
        width="300"
        height="650"
        class="float-right w-[300px] mt-0 ml-20 md:block hidden"
        alt="Multiple screenshots forming a vertical strip"
        loading="lazy" decoding="async"
        
          src="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/screenshot_layers_sm.jpg"
        
      />
    </picture>
  


    <figcaption class="text-center">Overlapping screenshots forming a continuous vertical strip</figcaption>
  </figure>



  
  
  
  
  

  
  
  <figure class="w-full px-8 md:hidden">
    
      
      








  
    <picture
      class="w-full px-8 md:hidden"
      
    >
      
      
      
      
        <source
          
            
              srcset="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/screenshot_layers_sm_hu_c0569127e14af630.webp"
            
          
          sizes="100vw"
          type="image/webp"
        />
      
      <img
        width="300"
        height="650"
        class="w-full px-8 md:hidden"
        alt="Multiple screenshots forming a vertical strip"
        loading="lazy" decoding="async"
        
          src="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/screenshot_layers_sm.jpg"
        
      />
    </picture>
  


    <figcaption class="text-center">Overlapping screenshots forming a continuous vertical strip</figcaption>
  </figure>


<p>So now we have many <em>many</em> screenshots, each slightly offset from the other in a grid. The task ahead is to find out exactly the right centre crop of the screenshots such that each cropped image tiles perfectly with its neighbours. There are plenty of approaches within the Python ecosystem for stitching together images, <a href="https://github.com/OpenStitching/stitching" target="_blank" rel="noreferrer">OpenStitching</a> or plain <a href="https://www.geeksforgeeks.org/image-stitching-with-opencv/" target="_blank" rel="noreferrer">OpenCV</a> for instance. These are mostly concerned with creating a single output image rather than keeping the sources as tiles, so they&rsquo;re not as helpful as I first thought.</p>
<p>It&rsquo;s also worth stating that I chose Python for this because it was the lingua franca when I worked in <a href="https://nick.recoil.org/work/" target="_blank" rel="noreferrer">post-production</a>, and the habit has stuck. The code is usually clear, the image processing libraries are numerous and mature, and support is widespread. While I was developing this on my Windows box, I used <a href="https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux" target="_blank" rel="noreferrer">WSL</a> to provide the runtime, although it could have just as easily been <a href="https://www.python.org/downloads/windows/" target="_blank" rel="noreferrer">a native installation</a>.</p>
<p>There are some unique properties of our images to exploit. Since we&rsquo;ve ensured the spacing between each image is precisely controlled, there will be one square crop we can make which will work globally across the entire data set. We need only find the value once and we’re done.</p>
<p>The first step is to take an overly generous centre square of each screenshot and save it out as an intermediate tile. Stitching these together raw gives us obvious visual repetition at the borders, but we know we&rsquo;ve still got the correct square somewhere in this image. Once we have these intermediate tiles, we can also delete the source screenshots. Efficiency!</p>
<p>The next step is to incorporate a dynamic overlap value we can control while running a script to produce a composite mosaic of our tiles. We don&rsquo;t need anything sophisticated here; eyeballing the right overlap value was quick, and the right size for our tiles turns out to be 542 px.</p>
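<p>A minimal sketch of that centre-crop step using Pillow might look like the following; <code>FINAL_TILE_PX</code> is an illustrative name, and the real script linked below works in two stages with its <code>TILE_CROP_SIZE</code> and <code>TILE_OVERLAP</code> constants.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"># Illustrative centre-crop of a single 2560 x 1440 screenshot down to one map tile
from PIL import Image

FINAL_TILE_PX = 542  # the tile size we settled on by eyeballing the overlap

def crop_tile(screenshot_path, tile_path):
    with Image.open(screenshot_path) as img:
        cx, cy = img.width // 2, img.height // 2
        half = FINAL_TILE_PX // 2
        box = (cx - half, cy - half, cx + half, cy + half)  # left, upper, right, lower
        img.crop(box).save(tile_path)
</code></pre></div>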
<p>You can see this approach in <a href="https://github.com/nickludlam/EnfusionMapMaker/blob/main/Scripts/crop_screenshots.py#L12" target="_blank" rel="noreferrer">crop_screenshots.py</a> which uses <a href="https://pypi.org/project/pillow/" target="_blank" rel="noreferrer">Python PIL</a> to perform the image operations, and contains separate constants <code>TILE_CROP_SIZE</code> and <code>TILE_OVERLAP</code>.</p>
<div class="flex rounded-md bg-primary-100 px-4 py-3 dark:bg-primary-900">
  <span class="pe-3 text-primary-400">
    <span class="icon relative inline-block px-1 align-text-bottom"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512"><path fill="currentColor" d="M256 0C114.6 0 0 114.6 0 256s114.6 256 256 256s256-114.6 256-256S397.4 0 256 0zM256 128c17.67 0 32 14.33 32 32c0 17.67-14.33 32-32 32S224 177.7 224 160C224 142.3 238.3 128 256 128zM296 384h-80C202.8 384 192 373.3 192 360s10.75-24 24-24h16v-64H224c-13.25 0-24-10.75-24-24S210.8 224 224 224h32c13.25 0 24 10.75 24 24v88h16c13.25 0 24 10.75 24 24S309.3 384 296 384z"/></svg>
</span>
  </span>
  <span class="dark:text-neutral-300">The eagle eyed among you might see the brightness change in the screenshot images. This is because Arma Reforger incorporates eye adaptation which automatically adjusts the exposure, so the darker ocean has a higher comparative exposure than when you&rsquo;re over land. It&rsquo;s why we have what look like JPEG artefacts around the coastal regions.</span>
</div>

<p class="clear-right"></p>
<p>A future piece of work will be to implement <a href="https://github.com/nickludlam/EnfusionMapMaker/blob/a9e27cc66c317ad28c5b949327099f95819583b6/Scripts/crop_screenshots.py#L334" target="_blank" rel="noreferrer">normalized cross-correlation using NumPy</a> to find the correct cropping value automatically, but this isn&rsquo;t necessary for the initial phase of this project. It&rsquo;s important to recognise when you&rsquo;re being pulled down a rabbit hole, and to keep version one simple.</p>








<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-2 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/crop_square_1.jpg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">The initial center tile crop</figcaption>
      
    </picture>
  </figure>
</div>

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/crop_square_2.jpg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">This tile should border on the previous tile</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>








<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-2 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class="w-64 rounded-md" src="images/tile_0_32_37.jpg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Tile 0/32/37</figcaption>
      
    </picture>
  </figure>
</div>

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class="w-64 rounded-md" src="images/tile_0_32_38.jpg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Tile 0/32/38</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>When we have dialled in the required <code>TILE_OVERLAP</code> we bake this into a final export of our LOD 0 tile set, and move on to the JavaScript to bring it to life.</p>
<h1 id="implementing-leafletjs" class="relative group">Implementing LeafletJS <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#implementing-leafletjs" aria-label="Anchor">#</a></span></h1><p>Having looked at a number of different browser-based mapping frameworks, let&rsquo;s use <a href="https://leafletjs.com" target="_blank" rel="noreferrer">LeafletJS</a>. It&rsquo;s very established, has plenty of implementation examples, and seems to have a good ecosystem of community plugins should I want to extend the UX functionality later.</p>
<p>So let&rsquo;s briefly talk about coordinate systems and terminology. We&rsquo;re about to smash two incompatible groups into each other: game developers and GIS people.</p>

  
  
  
  
  

  
  
  <figure class="mx-auto md:w-[400px] w-full">
    <a href="https://mastodon.social/@acegikmo/109429307211544506">
      
      








  
    <picture
      class="mx-auto md:w-[400px] w-full"
      
    >
      
      
      
      
        <source
          
            srcset="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/@Acegikmo@mastodon.gamedev.place_coordinate_system_diagram_hu_58814902986ad5bf.webp 330w,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/@Acegikmo@mastodon.gamedev.place_coordinate_system_diagram_hu_808ebfd1645219a0.webp 660w
            
              ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/@Acegikmo@mastodon.gamedev.place_coordinate_system_diagram_hu_fbcebcc0434030ad.webp 1024w
            
            
              ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/@Acegikmo@mastodon.gamedev.place_coordinate_system_diagram_hu_61914c0d2f978b71.webp 1320w
            "
          
          sizes="100vw"
          type="image/webp"
        />
      
      <img
        width="1440"
        height="1440"
        class="mx-auto md:w-[400px] w-full"
        alt="Freya Holmér&#39;s excellent coordinate system diagram"
        loading="lazy" decoding="async"
        
          src="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/@Acegikmo@mastodon.gamedev.place_coordinate_system_diagram_hu_8a5981dc9e0a5f99.png" srcset="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/@Acegikmo@mastodon.gamedev.place_coordinate_system_diagram_hu_4aa39a573ed2e66f.png 330w,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/@Acegikmo@mastodon.gamedev.place_coordinate_system_diagram_hu_8a5981dc9e0a5f99.png 660w
          
            ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/@Acegikmo@mastodon.gamedev.place_coordinate_system_diagram_hu_24bdbc9354900115.png 1024w
          
          
            ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/@Acegikmo@mastodon.gamedev.place_coordinate_system_diagram_hu_8488dfc17162a6c3.png 1320w
          "
          sizes="100vw"
        
      />
    </picture>
  

</a>
    <figcaption class="text-center">Freya Holmér&rsquo;s excellent coordinate system diagram</figcaption>
  </figure>


<p>The Enfusion game engine sits in the CORRECT quadrant, the upper right 😊</p>
<p><strong>+X</strong> is right, <strong>+Y</strong> is up and <strong>+Z</strong> is forward. The game world is flat, and all is good in the world. But wait, what&rsquo;s that? We live on a sphere, and use latitude and longitude to get around? <em>Uh oh!</em></p>
<p>





<figure>
    
    








  
    <picture
      class="mx-auto my-0 rounded-md"
      
    >
      
      
      
      
        <source
          
            srcset="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/polar_bear_cartesian_bear_meme_hu_687af19140181766.webp 330w,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/polar_bear_cartesian_bear_meme_hu_b5215f5d01e07978.webp 660w
            
              
                ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/polar_bear_cartesian_bear_meme_hu_5213460ec29b0057.webp 865w
              
            
            
              
                ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/polar_bear_cartesian_bear_meme_hu_5213460ec29b0057.webp 865w
              
            "
          
          sizes="100vw"
          type="image/webp"
        />
      
      <img
        width="865"
        height="486"
        class="mx-auto my-0 rounded-md"
        alt="A joke about Polar Bears vs Cartesian Bears"
        loading="lazy" decoding="async"
        
          src="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/polar_bear_cartesian_bear_meme_hu_62919ce451ae69f.jpg" srcset="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/polar_bear_cartesian_bear_meme_hu_dee6ae1ecf3a237b.jpg 330w,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/polar_bear_cartesian_bear_meme_hu_62919ce451ae69f.jpg 660w
          
            ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/polar_bear_cartesian_bear_meme.jpg 865w
          
          
            ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/polar_bear_cartesian_bear_meme.jpg 865w
          "
          sizes="100vw"
        
      />
    </picture>
  

<figcaption class="text-center">I don&rsquo;t know which one would win in a fight</figcaption>
</figure>
</p>
<p>That&rsquo;s right, LeafletJS was born out of the need to display real-world maps, so it is fundamentally different from our rectilinear game world. Not only is there a spherical coordinate reference system, but the map display also thinks of the origin as being in the top left, with +Y pointing down.</p>
<p>To add insult to injury, we also have reversed thoughts on detail levels. So far I&rsquo;ve named the tiles according to the game-dev <a href="https://en.wikipedia.org/wiki/Level_of_detail_%28computer_graphics%29" target="_blank" rel="noreferrer">LOD</a> and <a href="https://en.wikipedia.org/wiki/Mipmap" target="_blank" rel="noreferrer">Mipmap</a> concepts. The ground truth is detail level 0, and all derivatives will become 1, 2, 3 etc., as we&rsquo;re simplifying information and losing detail. Leaflet, however, thinks in terms of <em>zoom levels</em>. You start out at zoom level 0, furthest away from your subject and as you zoom further into the map, you get progressively more detailed tiles.</p>
<p>Coordinate problems are always a huge <em>PITA</em>. They can introduce sign, off-by-one and scale errors all over the place unless you&rsquo;re disciplined. Thankfully LeafletJS has a number of approaches for tackling this, and makes it pleasant to work with, once you know <em>how</em> to implement these conversions.</p>
<p>Firstly let&rsquo;s tackle the <strong>coordinate flip</strong>. Our tiles have an origin in the bottom left, so we flip the <code>Game Z</code> coordinate as it&rsquo;s converted into a <code>Tile Y</code>.  We implement the following basic extension to the <code>TileLayer</code> class, and the <code>+1</code> offset is there to account for flipping the origin from the bottom of the grid square to the top.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-ts" data-lang="ts"><span class="line"><span class="cl"><span class="c1">// Our custom tile layer which inverts the Y axis
</span></span></span><span class="line"><span class="cl"><span class="nx">L</span><span class="p">.</span><span class="nx">TileLayer</span><span class="p">.</span><span class="nx">InvertedY</span> <span class="o">=</span> <span class="nx">L</span><span class="p">.</span><span class="nx">TileLayer</span><span class="p">.</span><span class="nx">extend</span><span class="p">({</span>
</span></span><span class="line"><span class="cl">  <span class="nx">getTileUrl</span>: <span class="kt">function</span><span class="p">(</span><span class="nx">tilecoords</span><span class="p">)</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">    <span class="nx">tilecoords</span><span class="p">.</span><span class="nx">y</span> <span class="o">=</span> <span class="o">-</span><span class="p">(</span><span class="nx">tilecoords</span><span class="p">.</span><span class="nx">y</span> <span class="o">+</span> <span class="mi">1</span><span class="p">);</span>
</span></span><span class="line"><span class="cl">    <span class="k">return</span> <span class="nx">L</span><span class="p">.</span><span class="nx">TileLayer</span><span class="p">.</span><span class="nx">prototype</span><span class="p">.</span><span class="nx">getTileUrl</span><span class="p">.</span><span class="nx">call</span><span class="p">(</span><span class="k">this</span><span class="p">,</span> <span class="nx">tilecoords</span><span class="p">);</span>
</span></span><span class="line"><span class="cl">  <span class="p">}</span>
</span></span><span class="line"><span class="cl"><span class="p">});</span>
</span></span></code></pre></div><p>Secondly, we need to <strong>invert the zoom numbering</strong> by adding <code>zoomReverse: true</code> to the tile layer, and specifying our maximum and minimum zoom levels.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-ts" data-lang="ts"><span class="line"><span class="cl"><span class="c1">// Configure our custom tile layer to use zoomReverse to match our Level Of Detail numbering
</span></span></span><span class="line"><span class="cl"><span class="nx">tileLayer</span> <span class="o">=</span> <span class="k">new</span> <span class="nx">L</span><span class="p">.</span><span class="nx">TileLayer</span><span class="p">.</span><span class="nx">InvertedY</span><span class="p">(</span><span class="s1">&#39;LODS/{z}/{x}/{y}/tile.jpg&#39;</span><span class="p">,</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">  <span class="nx">maxZoom</span>: <span class="kt">MAX_ZOOM</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">  <span class="nx">minZoom</span>: <span class="kt">0</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">  <span class="nx">zoomReverse</span>: <span class="kt">true</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">  <span class="nx">bounds</span>: <span class="kt">bounds</span><span class="p">,</span>
</span></span><span class="line"><span class="cl"><span class="p">}).</span><span class="nx">addTo</span><span class="p">(</span><span class="nx">map</span><span class="p">);</span>
</span></span></code></pre></div><p>Lastly, we need to <strong>scale the coordinates</strong> correctly. This involves creating a custom <a href="https://leafletjs.com/reference.html#crs" target="_blank" rel="noreferrer">Coordinate Reference System</a> which scales the tiles such that the in-game coordinate system corresponds with the Leaflet coordinates. We firstly want to use <code>L.Projection.LonLat</code> as we want to specify coordinates in <code>X, Z</code> order. We also need to scale coordinates so that our tile size of 542 px is accounted for, where the standard tile size is 256 px.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-ts" data-lang="ts"><span class="line"><span class="cl">  <span class="c1">// Transformation() takes values to form a 2x2 linear transformation matrix
</span></span></span><span class="line"><span class="cl">  <span class="nx">L</span><span class="p">.</span><span class="nx">CRS</span><span class="p">.</span><span class="nx">CustomSimple</span> <span class="o">=</span> <span class="nx">L</span><span class="p">.</span><span class="nx">Util</span><span class="p">.</span><span class="nx">extend</span><span class="p">({},</span> <span class="nx">L</span><span class="p">.</span><span class="nx">CRS</span><span class="p">,</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">    <span class="nx">projection</span>: <span class="kt">L.Projection.LonLat</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="nx">transformation</span>: <span class="kt">new</span> <span class="nx">L</span><span class="p">.</span><span class="nx">Transformation</span><span class="p">(</span><span class="mi">1</span><span class="o">/</span><span class="mf">12.501</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="o">/</span><span class="mf">12.501</span><span class="p">,</span> <span class="mi">0</span><span class="p">),</span>
</span></span><span class="line"><span class="cl">    <span class="p">...</span>
</span></span><span class="line"><span class="cl">  <span class="p">});</span>
</span></span></code></pre></div><p>I&rsquo;ve <strong>no clue</strong> why the scaling factor turns out to be <strong>1/12.501</strong>. The whole CRS area of LeafletJS looked like a huge can of worms, so I ended up placing three map markers at the game coordinates of obvious visible landmarks and then tweaked the scaling factor until they all lined up. Good enough, let&rsquo;s move on.</p>
<p>The final job is to create a couple of helper functions to handle one last aspect of the coordinate system conversion. There is a hidden offset in what we&rsquo;ve created: the tiles have a centre origin, whereas LeafletJS uses a corner origin. We account for this with the following:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-js" data-lang="js"><span class="line"><span class="cl"><span class="c1">// our tiles are 100 m, so it&#39;s 50 m to the centre
</span></span></span><span class="line"><span class="cl"><span class="nx">EDGE_TO_CENTER_OFFSET</span> <span class="o">=</span> <span class="mi">50</span> 
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="kd">function</span> <span class="nx">gameCoordsToLatLng</span><span class="p">(</span><span class="nx">gameCoordinate</span><span class="p">)</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">  <span class="k">return</span> <span class="nx">L</span><span class="p">.</span><span class="nx">latLng</span><span class="p">([</span><span class="nx">gameCoordinate</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span> <span class="o">+</span> <span class="nx">EDGE_TO_CENTER_OFFSET</span><span class="p">,</span> <span class="nx">coordPair</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">+</span> <span class="nx">EDGE_TO_CENTER_OFFSET</span><span class="p">]);</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="kd">function</span> <span class="nx">latLngToGameCoords</span><span class="p">(</span><span class="nx">latlng</span><span class="p">)</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">  <span class="k">return</span> <span class="p">[</span><span class="nx">latlng</span><span class="p">.</span><span class="nx">lng</span> <span class="o">-</span> <span class="nx">EDGE_TO_CENTER_OFFSET</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="nx">latlng</span><span class="p">.</span><span class="nx">lat</span> <span class="o">-</span> <span class="nx">EDGE_TO_CENTER_OFFSET</span><span class="p">];</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span>
</span></span></code></pre></div><p>The final implementation of all the above can be seen in <a href="https://github.com/nickludlam/EnfusionMapMaker/blob/main/Web/reforger-map.js" target="_blank" rel="noreferrer">EnfusionMapMaker/Web/reforger-map.js</a>.</p>
<h1 id="zoom-levels" class="relative group">Zoom levels <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#zoom-levels" aria-label="Anchor">#</a></span></h1><p>Now we have a set of LOD 0 tiles, we can aggregate them into LOD 1, LOD 2 etc. The animation below shows how this system works. In the bottom left corner of each tile we&rsquo;re printing the LOD level, and the X and Y coordinate. The coordinates are based on scaling by powers of two. The tiles bounded by <code>64,64</code> to <code>65,65</code> become a single tile at <code>32,32</code>. Then the same again, tiles inside <code>32,32</code> to <code>33,33</code> become <code>16,16</code> and so on.</p>
<p>It&rsquo;s extremely useful to be able to turn on a debug layer in the map and see exactly which coordinates are being used to fetch tiles. This can be achieved with the following <code>GridLayer</code> definition.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-js" data-lang="js"><span class="line"><span class="cl"><span class="nx">L</span><span class="p">.</span><span class="nx">GridLayer</span><span class="p">.</span><span class="nx">GridDebug</span> <span class="o">=</span> <span class="nx">L</span><span class="p">.</span><span class="nx">GridLayer</span><span class="p">.</span><span class="nx">extend</span><span class="p">({</span>
</span></span><span class="line"><span class="cl">  <span class="nx">createTile</span><span class="o">:</span> <span class="kd">function</span> <span class="p">(</span><span class="nx">coords</span><span class="p">)</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">    <span class="kr">const</span> <span class="nx">tile</span> <span class="o">=</span> <span class="nb">document</span><span class="p">.</span><span class="nx">createElement</span><span class="p">(</span><span class="s1">&#39;div&#39;</span><span class="p">);</span>
</span></span><span class="line"><span class="cl">    <span class="nx">tile</span><span class="p">.</span><span class="nx">style</span><span class="p">.</span><span class="nx">outline</span> <span class="o">=</span> <span class="s1">&#39;1px solid #111&#39;</span><span class="p">;</span>
</span></span><span class="line"><span class="cl">    <span class="nx">tile</span><span class="p">.</span><span class="nx">style</span><span class="p">.</span><span class="nx">fontWeight</span> <span class="o">=</span> <span class="s1">&#39;bold&#39;</span><span class="p">;</span>
</span></span><span class="line"><span class="cl">    <span class="nx">tile</span><span class="p">.</span><span class="nx">style</span><span class="p">.</span><span class="nx">fontSize</span> <span class="o">=</span> <span class="s1">&#39;14pt&#39;</span><span class="p">;</span>
</span></span><span class="line"><span class="cl">    <span class="nx">tile</span><span class="p">.</span><span class="nx">style</span><span class="p">.</span><span class="nx">color</span> <span class="o">=</span> <span class="s1">&#39;red&#39;</span><span class="p">;</span>
</span></span><span class="line"><span class="cl">    <span class="nx">tile</span><span class="p">.</span><span class="nx">innerHTML</span> <span class="o">=</span> <span class="p">[</span><span class="nx">MAX_ZOOM</span> <span class="o">-</span> <span class="nx">coords</span><span class="p">.</span><span class="nx">z</span><span class="p">,</span> <span class="nx">coords</span><span class="p">.</span><span class="nx">x</span><span class="p">,</span> <span class="o">-</span><span class="p">(</span><span class="nx">coords</span><span class="p">.</span><span class="nx">y</span><span class="o">+</span><span class="mi">1</span><span class="p">)].</span><span class="nx">join</span><span class="p">(</span><span class="s1">&#39;/&#39;</span><span class="p">);</span>
</span></span><span class="line"><span class="cl">    <span class="k">return</span> <span class="nx">tile</span><span class="p">;</span>
</span></span><span class="line"><span class="cl">  <span class="p">}</span>
</span></span><span class="line"><span class="cl"><span class="p">});</span>
</span></span></code></pre></div>
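<p>Once defined, the debug grid is added on top of the tile layer like any other Leaflet layer; a minimal sketch:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-js" data-lang="js">// Overlay the debug grid so each tile shows its Z/X/Y label
const gridDebugLayer = new L.GridLayer.GridDebug();
gridDebugLayer.addTo(map);

// Remove it again once you have finished inspecting coordinates
// map.removeLayer(gridDebugLayer);
</code></pre></div>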





<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="flex items-center justify-center">
  <figure>
    <video style="margin: 0"  muted autoplay loop playsinline>  
      <source src="videos/LOD_zooming.mp4" type="video/mp4" />
      <source src="videos/LOD_zooming.webm" type="video/webm" />
    </video>
    
    <figcaption class="text-center">An animation showing the debug overlay with tile coordinates Z/X/Y</figcaption>
    
</figure>
</div>

    </div>
  </div>
</div>
<p>The two maps in the base game of Arma Reforger are called <em>Everon</em> and <em>Arland</em>. On Everon we need 5 zoom levels, but on Arland we only need 4 as it&rsquo;s much smaller. The code to do this is implemented in <a href="https://github.com/nickludlam/EnfusionMapMaker/blob/main/Scripts/create_zoom_levels.py#L132" target="_blank" rel="noreferrer">make_lod()</a> and <a href="https://github.com/nickludlam/EnfusionMapMaker/blob/main/Scripts/create_zoom_levels.py#L174" target="_blank" rel="noreferrer">merge_tiles()</a>, which perform a bounds query, a composite and a write across each of the tiles in the previous LOD.</p>
<p>Lastly, I use <a href="https://imagemagick.org/script/mogrify.php" target="_blank" rel="noreferrer">ImageMagick&rsquo;s Mogrify tool</a> in the <a href="https://github.com/nickludlam/EnfusionMapMaker/blob/main/Scripts/compress_tiles.sh" target="_blank" rel="noreferrer">compress_tiles.sh</a> shell script to optimise the file size of the JPEGs, giving fine control over the quality, chroma subsampling, colourspace and interlacing. This could have been integrated into the Python code, but it&rsquo;s better to treat this step as part of building the website: it stays entirely optional and uses existing tools that are well optimised for the task.</p>
<h1 id="map-stats" class="relative group">Map stats <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#map-stats" aria-label="Anchor">#</a></span></h1><p>At this stage we&rsquo;re done with the map tiles. We have all our zoom levels, and we support the game coordinate system to plot positions accurately. Stepping back, it&rsquo;s nice to get a sense of the scale of the thing we&rsquo;ve just created, as it&rsquo;s not particularly obvious when you&rsquo;re down working in the details.</p>
<table>
  <tr>
    <th>Map name</th>
    <td>Everon</td>
  </tr>
  <tr>
    <th>Game area</th>
    <td>13 km x 13 km</td>
  </tr>
  <tr>
    <th>Game distance per LOD 0 tile</th>
    <td>100 m</td>
  </tr>
  <tr>
    <th>Tile image size</th>
    <td>542 x 542 px</td>
  </tr>
  <tr>
    <th>Resolution</th>
    <td>~20 cm per pixel</td>
  </tr>
  <tr>
    <th>Total tile storage</th>
    <td>398 MB</td>
  </tr>
  <tr>
    <th>LOD filesizes (0-5)</th>
    <td>
      276 MB / 81 MB / 24 MB / 6.7 MB / 1.9 MB / 524 KB
    </td>
  </tr>        
</table>
<div class="flex rounded-md bg-primary-100 px-4 py-3 dark:bg-primary-900">
  <span class="pe-3 text-primary-400">
    <span class="icon relative inline-block px-1 align-text-bottom"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512"><path fill="currentColor" d="M506.3 417l-213.3-364c-16.33-28-57.54-28-73.98 0l-213.2 364C-10.59 444.9 9.849 480 42.74 480h426.6C502.1 480 522.6 445 506.3 417zM232 168c0-13.25 10.75-24 24-24S280 154.8 280 168v128c0 13.25-10.75 24-23.1 24S232 309.3 232 296V168zM256 416c-17.36 0-31.44-14.08-31.44-31.44c0-17.36 14.07-31.44 31.44-31.44s31.44 14.08 31.44 31.44C287.4 401.9 273.4 416 256 416z"/></svg>
</span>
  </span>
  <span class="dark:text-neutral-300">I was surprised at how large the tile set was, but if you do the maths, LOD 0 is effectively a single <strong>70,000 x 70,000 px image</strong>, and would need <strong>14 GB</strong> to hold uncompressed in memory!</span>
</div>

<p class="mt-12"></p>
<hr />
<h1 id="extracting-location-data" class="relative group">Extracting location data <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#extracting-location-data" aria-label="Anchor">#</a></span></h1><p>Now we have our map, with accurate scale, we now need to generate a set of coordinates for every hidden supply cache on the map. We first look for commonalities within the entity and prefab system.</p>
<p>Every supply cache has an inventory object you can interact with. This is usually in the form of a wooden post.</p>
<p>





<figure>
    
    








  
    <picture
      class="mx-auto my-0 rounded-md"
      
    >
      
      
      
      
        <source
          
            srcset="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supply_cache_post_hu_dd36cf5412fa97a0.webp 330w,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supply_cache_post_hu_b45f3a322dd6aede.webp 660w
            
              
                ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supply_cache_post_hu_b87c882959e22fe1.webp 712w
              
            
            
              
                ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supply_cache_post_hu_b87c882959e22fe1.webp 712w
              
            "
          
          sizes="100vw"
          type="image/webp"
        />
      
      <img
        width="712"
        height="529"
        class="mx-auto my-0 rounded-md"
        alt="A screenshot from the Enfusion editor showing a wooden signpost"
        loading="lazy" decoding="async"
        
          src="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supply_cache_post_hu_e41293d13954db66.jpg" srcset="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supply_cache_post_hu_ad1bc1a5470486df.jpg 330w,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supply_cache_post_hu_e41293d13954db66.jpg 660w
          
            ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supply_cache_post.jpg 712w
          
          
            ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supply_cache_post.jpg 712w
          "
          sizes="100vw"
        
      />
    </picture>
  

<figcaption class="text-center">The post which allow interaction with the inventory system</figcaption>
</figure>
</p>
<p>If we look at the component properties of this object in the Enfusion Workbench, we can see it contains two components which look unique to these types of object: <code>InventoryItemComponent</code> and <code>SCR_ResourceComponent</code>.</p>
<p>





<figure>
    
    








  
    <picture
      class="mx-auto my-0 rounded-md"
      
    >
      
      
      
      
        <source
          
            srcset="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supply_cache_post_components_hu_13ccb892c59f3c8c.webp 330w,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supply_cache_post_components_hu_8af4196ccc72346b.webp 660w
            
              
                ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supply_cache_post_components_hu_6dba71b981a0ab0.webp 800w
              
            
            
              
                ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supply_cache_post_components_hu_6dba71b981a0ab0.webp 800w
              
            "
          
          sizes="100vw"
          type="image/webp"
        />
      
      <img
        width="800"
        height="450"
        class="mx-auto my-0 rounded-md"
        alt="The two components we&rsquo;re looking for"
        loading="lazy" decoding="async"
        
          src="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supply_cache_post_components_hu_a43f26ca076c4e58.jpg" srcset="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supply_cache_post_components_hu_5fbf27f262b890f8.jpg 330w,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supply_cache_post_components_hu_a43f26ca076c4e58.jpg 660w
          
            ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supply_cache_post_components.jpg 800w
          
          
            ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/supply_cache_post_components.jpg 800w
          "
          sizes="100vw"
        
      />
    </picture>
  

<figcaption class="text-center">The unique components which denote a supply cache location</figcaption>
</figure>
</p>
<p>Now that we know what components we&rsquo;re looking for, we need to work out how to make the queries. Unfortunately there&rsquo;s not a huge amount of documentation on making Enfusion editor tools. The best place to start is the <a href="https://github.com/BohemiaInteractive/Arma-Reforger-Samples/blob/main/SampleMod_WorkbenchPlugin/Scripts/WorkbenchGame/SamplePlugins/SampleWorldEditorTool.c" target="_blank" rel="noreferrer">Sample World Editor Tool</a> that Bohemia have published, which gives us the following hint.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-c#" data-lang="c#"><span class="line"><span class="cl"><span class="c1">// Get World Editor module</span>
</span></span><span class="line"><span class="cl"><span class="n">WorldEditor</span> <span class="n">worldEditor</span> <span class="p">=</span> <span class="n">Workbench</span><span class="p">.</span><span class="n">GetModule</span><span class="p">(</span><span class="n">WorldEditor</span><span class="p">);</span>
</span></span><span class="line"><span class="cl"><span class="c1">// Get World Editor API</span>
</span></span><span class="line"><span class="cl"><span class="n">WorldEditorAPI</span> <span class="n">api</span> <span class="p">=</span> <span class="n">worldEditor</span><span class="p">.</span><span class="n">GetApi</span><span class="p">();</span>
</span></span><span class="line"><span class="cl"><span class="n">World</span> <span class="n">world</span> <span class="p">=</span> <span class="n">api</span><span class="p">.</span><span class="n">GetWorld</span><span class="p">();</span>
</span></span></code></pre></div><p>With a <code>World</code> handle, we can use the method <a href="https://community.bistudio.com/wikidata/external-data/arma-reforger/EnfusionScriptAPIPublic/interfaceBaseWorld.html#af8bf2b3173c1731a965bec513fbd98b0" target="_blank" rel="noreferrer">QueryEntitiesByAABB()</a> to query and filter a list of objects. We pass in two callbacks: the first decides whether an entity should be added to the results, and the second runs for each added entity and returns whether the query should continue.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-c#" data-lang="c#"><span class="line"><span class="cl">  <span class="c1">// Init our array to store the entities</span>
</span></span><span class="line"><span class="cl">  <span class="n">m_entityResults</span> <span class="p">=</span> <span class="k">new</span> <span class="n">array</span><span class="p">&lt;</span><span class="n">IEntity</span><span class="p">&gt;;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">  <span class="c1">// Declare our bounds</span>
</span></span><span class="line"><span class="cl">  <span class="n">vector</span> <span class="n">queryBoundsMin</span> <span class="p">=</span> <span class="k">new</span> <span class="n">Vector</span><span class="p">(</span><span class="m">0</span><span class="p">,</span> <span class="m">0</span><span class="p">,</span> <span class="m">0</span><span class="p">);</span>
</span></span><span class="line"><span class="cl">  <span class="n">vector</span> <span class="n">queryBoundsMax</span> <span class="p">=</span> <span class="k">new</span> <span class="n">Vector</span><span class="p">(</span><span class="m">13000</span><span class="p">,</span> <span class="m">200</span><span class="p">,</span> <span class="m">13000</span><span class="p">);</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">  <span class="c1">// Perform our bounded query for entities</span>
</span></span><span class="line"><span class="cl">  <span class="kt">bool</span> <span class="n">queryResult</span> <span class="p">=</span> <span class="n">m_currentWorld</span><span class="p">.</span><span class="n">QueryEntitiesByAABB</span><span class="p">(</span>
</span></span><span class="line"><span class="cl">    <span class="n">queryBoundsMin</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="n">queryBoundsMax</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="n">filterEntitiesCallback</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="n">addEntitiesCallback</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="n">EQueryEntitiesFlags</span><span class="p">.</span><span class="n">ALL</span>
</span></span><span class="line"><span class="cl">  <span class="p">);</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">  <span class="c1">// A basic test to look for both components being present</span>
</span></span><span class="line"><span class="cl">	<span class="kt">bool</span> <span class="n">filterEntitiesCallback</span><span class="p">(</span><span class="n">IEntity</span> <span class="n">e</span><span class="p">)</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">    <span class="k">if</span> <span class="p">(</span><span class="n">e</span><span class="p">.</span><span class="n">FindComponent</span><span class="p">(</span><span class="n">SCR_ResourceComponent</span><span class="p">)</span> <span class="p">&amp;&amp;</span>
</span></span><span class="line"><span class="cl">        <span class="n">e</span><span class="p">.</span><span class="n">FindComponent</span><span class="p">(</span><span class="n">InventoryItemComponent</span><span class="p">))</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">      <span class="k">return</span> <span class="kc">true</span><span class="p">;</span>
</span></span><span class="line"><span class="cl">    <span class="p">}</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="k">return</span> <span class="kc">false</span><span class="p">;</span>
</span></span><span class="line"><span class="cl">  <span class="p">}</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">  <span class="c1">// Find all the entities, so always return true</span>
</span></span><span class="line"><span class="cl">  <span class="kt">bool</span> <span class="n">addEntitiesCallback</span><span class="p">(</span><span class="n">IEntity</span> <span class="n">e</span><span class="p">)</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">		<span class="n">m_entityResults</span><span class="p">.</span><span class="n">Insert</span><span class="p">(</span><span class="n">e</span><span class="p">);</span>
</span></span><span class="line"><span class="cl">		<span class="k">return</span> <span class="kc">true</span><span class="p">;</span>
</span></span><span class="line"><span class="cl">	<span class="p">}</span>
</span></span></code></pre></div><p>One feature of the Enfusion Workbench I love (and honestly there aren&rsquo;t many) is the hyperlinked console log. When you print a raw IEntity with <code>Print(entity)</code>, the log line becomes a link; clicking it focuses that object in the game camera and selects it in the hierarchy. This way you can easily confirm that all the filtered objects are indeed the type you&rsquo;re looking for.</p>
<p>





<figure>
    
    








  
    <picture
      class="mx-auto my-0 rounded-md"
      
    >
      
      
      
      
        <source
          
            srcset="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/hyperlinked_console_logs_hu_a0a004155c598e85.webp 330w,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/hyperlinked_console_logs_hu_cdb54c6d2789cf99.webp 660w
            
              ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/hyperlinked_console_logs_hu_c2127cb5b6a2e309.webp 1024w
            
            
              
                ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/hyperlinked_console_logs_hu_897ab91a10c6e214.webp 1275w
              
            "
          
          sizes="100vw"
          type="image/webp"
        />
      
      <img
        width="1275"
        height="404"
        class="mx-auto my-0 rounded-md"
        alt="The console log in the Enfusion editor"
        loading="lazy" decoding="async"
        
          src="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/hyperlinked_console_logs_hu_6816345ad687bd9.jpg" srcset="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/hyperlinked_console_logs_hu_2ff423a36fbecbf0.jpg 330w,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/hyperlinked_console_logs_hu_6816345ad687bd9.jpg 660w
          
            ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/hyperlinked_console_logs_hu_e033a2f1df6c5d6b.jpg 1024w
          
          
            ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/hyperlinked_console_logs.jpg 1275w
          "
          sizes="100vw"
        
      />
    </picture>
  

<figcaption class="text-center">Clicking on the purple text will highlight that entity</figcaption>
</figure>
</p>
<p>Now that we have our list of IEntity instances, we can write their positions to a simple JSON file using the <a href="https://community.bistudio.com/wikidata/external-data/arma-reforger/EnfusionScriptAPIPublic/interfaceFileIO.html" target="_blank" rel="noreferrer">FileIO</a> module.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-c#" data-lang="c#"><span class="line"><span class="cl"><span class="n">FileHandle</span> <span class="n">fh</span> <span class="p">=</span> <span class="n">FileIO</span><span class="p">.</span><span class="n">OpenFile</span><span class="p">(</span><span class="s">&#34;$profile:data.json&#34;</span><span class="p">,</span> <span class="n">FileMode</span><span class="p">.</span><span class="n">WRITE</span><span class="p">);</span>
</span></span><span class="line"><span class="cl"><span class="k">if</span> <span class="p">(</span><span class="n">fh</span><span class="p">)</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">  <span class="n">fh</span><span class="p">.</span><span class="n">WriteLine</span><span class="p">(</span><span class="s">&#34;[&#34;</span><span class="p">);</span>
</span></span><span class="line"><span class="cl">  <span class="k">foreach</span><span class="p">(</span><span class="n">IEntity</span> <span class="n">foundEntity</span> <span class="p">:</span> <span class="n">m_entityResults</span><span class="p">)</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">    <span class="c1">// Write a position array</span>
</span></span><span class="line"><span class="cl">    <span class="n">vector</span> <span class="n">position</span> <span class="p">=</span> <span class="n">foundEntity</span><span class="p">.</span><span class="n">GetOrigin</span><span class="p">();</span>
</span></span><span class="line"><span class="cl">    <span class="kt">string</span> <span class="n">formattedLocationLine</span> <span class="p">=</span> <span class="kt">string</span><span class="p">.</span><span class="n">Format</span><span class="p">(</span><span class="s">&#34;  [%1, %2, %3],&#34;</span><span class="p">,</span> <span class="n">position</span><span class="p">[</span><span class="m">0</span><span class="p">],</span> <span class="n">position</span><span class="p">[</span><span class="m">1</span><span class="p">],</span> <span class="n">position</span><span class="p">[</span><span class="m">2</span><span class="p">]);</span>
</span></span><span class="line"><span class="cl">    <span class="n">fh</span><span class="p">.</span><span class="n">WriteLine</span><span class="p">(</span><span class="n">formattedLocationLine</span><span class="p">);</span>    
</span></span><span class="line"><span class="cl">  <span class="p">}</span>
</span></span><span class="line"><span class="cl">  <span class="n">fh</span><span class="p">.</span><span class="n">WriteLine</span><span class="p">(</span><span class="s">&#34;]&#34;</span><span class="p">);</span>
</span></span><span class="line"><span class="cl">  <span class="n">fh</span><span class="p">.</span><span class="n">Close</span><span class="p">();</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">  <span class="n">Print</span><span class="p">(</span><span class="s">&#34;Failed to open file for writing&#34;</span><span class="p">);</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span>
</span></span></code></pre></div><p>And out pops the array of coordinates, ready for us to plot using LeafletJS.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-json" data-lang="json"><span class="line"><span class="cl"><span class="p">[</span>
</span></span><span class="line"><span class="cl">  <span class="err">...</span>
</span></span><span class="line"><span class="cl">  <span class="p">[</span><span class="mf">3215.72</span><span class="p">,</span> <span class="mf">4.00446737</span><span class="p">,</span> <span class="mf">2948.74</span><span class="p">],</span>
</span></span><span class="line"><span class="cl">  <span class="p">[</span><span class="mf">3231.16</span><span class="p">,</span> <span class="mf">0.0918469</span><span class="p">,</span> <span class="mf">2955.15</span><span class="p">],</span>
</span></span><span class="line"><span class="cl">  <span class="p">[</span><span class="mf">3242.65</span><span class="p">,</span> <span class="mf">0.20875</span><span class="p">,</span> <span class="mf">2894.48</span><span class="p">],</span>
</span></span><span class="line"><span class="cl">  <span class="p">[</span><span class="mf">3229.39</span><span class="p">,</span> <span class="mf">3.1775</span><span class="p">,</span> <span class="mf">2915.56</span><span class="p">],</span>
</span></span><span class="line"><span class="cl">  <span class="p">[</span><span class="mf">3179.32</span><span class="p">,</span> <span class="mf">9.6688</span><span class="p">,</span> <span class="mf">2834.27</span><span class="p">],</span>
</span></span><span class="line"><span class="cl">  <span class="err">...</span>
</span></span><span class="line"><span class="cl"><span class="p">]</span>
</span></span></code></pre></div><p>So now we can simply import this data into our LeafletJS setup and plot the X and Z coordinates.</p>
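<p>Assuming the exported JSON has been loaded into a <code>supplyCachePositions</code> array (the variable name is mine, purely for illustration), plotting is just a loop over the positions using the <code>gameCoordsToLatLng()</code> helper from earlier:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-js" data-lang="js">// Each entry is an [X, Y, Z] game coordinate; only X and Z matter for the map
supplyCachePositions.forEach(function (position) {
  L.marker(gameCoordsToLatLng(position)).addTo(map);
});
</code></pre></div>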
<h1 id="the-finished-map" class="relative group">The finished map <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#the-finished-map" aria-label="Anchor">#</a></span></h1><p>





<figure>
    
    








  
    <picture
      class="mx-auto my-0 rounded-md"
      
    >
      
      
      
      
        <source
          
            srcset="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/marked_supply_caches_hu_fc7888347e742b81.webp 330w,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/marked_supply_caches_hu_f27ee24f7ea934c6.webp 660w
            
              
                ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/marked_supply_caches_hu_d82194e84f73b97b.webp 1024w
              
            
            
              
                ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/marked_supply_caches_hu_d82194e84f73b97b.webp 1024w
              
            "
          
          sizes="100vw"
          type="image/webp"
        />
      
      <img
        width="1024"
        height="800"
        class="mx-auto my-0 rounded-md"
        alt="A map with many map pins"
        loading="lazy" decoding="async"
        
          src="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/marked_supply_caches_hu_20184e828c97d3a6.jpg" srcset="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/marked_supply_caches_hu_722a8f22112c0724.jpg 330w,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/marked_supply_caches_hu_20184e828c97d3a6.jpg 660w
          
            ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/marked_supply_caches.jpg 1024w
          
          
            ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/marked_supply_caches.jpg 1024w
          "
          sizes="100vw"
        
      />
    </picture>
  

<figcaption class="text-center">Our marked locations</figcaption>
</figure>
</p>
<p>Success! We have our web page accurately displaying the location data, and everything comes directly from the game engine, 100% procedurally.</p>
<p>You can find the code on GitHub in <a href="https://github.com/nickludlam/EnfusionMapMaker/" target="_blank" rel="noreferrer">nickludlam/EnfusionMapMaker</a>, and a readme that takes you through most of the steps.</p>
<p>For the final website I took things a little further, using <a href="https://svelte.dev" target="_blank" rel="noreferrer">Svelte</a> to provide page templating and deployment options. There were a couple of challenges in making the client-side LeafletJS library play nicely with TypeScript and with building a static version of the site, but that&rsquo;s out of scope for this particular article. The public repository contains the vanilla HTML and JavaScript to get everything working, and people can customise it as they like.</p>
<p>The current live implementation is <a href="https://reforger.recoil.org" target="_blank" rel="noreferrer">https://reforger.recoil.org</a>.</p>
<h1 id="some-topographic-fun" class="relative group">Some topographic fun <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#some-topographic-fun" aria-label="Anchor">#</a></span></h1><p>Out of curiosity, I wanted to see what it would be like to combine the very flat composite map (effectively pure <a href="https://en.wikipedia.org/wiki/Albedo" target="_blank" rel="noreferrer">albedo</a>) with the large-scale shading from the in-game base map from earlier. Simply compositing the layers using an <em>overlay</em> blend mode and some manipulation of the brightness ranges of the shaded map achieves the effect. I think it gives you a much better sense of where the mountainous areas are with the contrast in luminance and a stronger sense of the shallow water. It&rsquo;s not physically realistic but it looks great.</p>

  
  
  
  
  

  
  
  <figure class="mx-auto my-0 rounded-md">
    <a href="images/composite_map_large.jpg">
      
      








  
    <picture
      class="mx-auto my-0 rounded-md"
      
    >
      
      
      
      
        <source
          
            srcset="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/composite_map_blending_hu_d10603a176578d03.webp 330w,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/composite_map_blending_hu_69f2e88a2bb14e8.webp 660w
            
              ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/composite_map_blending_hu_5b2546a8654bbb2.webp 1024w
            
            
              
                ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/composite_map_blending_hu_39fae5e88dd06122.webp 1200w
              
            "
          
          sizes="100vw"
          type="image/webp"
        />
      
      <img
        width="1200"
        height="360"
        class="mx-auto my-0 rounded-md"
        alt="An image of the composite aerial imagery, an image of the grey and white shaded map from the Enfusion engine, combined into a final shaded map"
        loading="lazy" decoding="async"
        
          src="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/composite_map_blending_hu_85d13291e9165c0e.png" srcset="https://nick.recoil.org/articles/making-maps-without-getting-lost/images/composite_map_blending_hu_1db34244069ac8c7.png 330w,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/composite_map_blending_hu_85d13291e9165c0e.png 660w
          
            ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/composite_map_blending_hu_24260654ae6442fc.png 1024w
          
          
            ,https://nick.recoil.org/articles/making-maps-without-getting-lost/images/composite_map_blending.png 1200w
          "
          sizes="100vw"
        
      />
    </picture>
  

</a>
    <figcaption class="text-center">Compositing the two maps to get an exaggerated shaded version. Click to see the full-size image</figcaption>
  </figure>


<hr />
<h1 id="comparison-with-the-official-map" class="relative group">Comparison with the official map <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#comparison-with-the-official-map" aria-label="Anchor">#</a></span></h1><p>This shows a comparison between the <a href="https://armedassault.fandom.com/wiki/Everon_%28terrain%29" target="_blank" rel="noreferrer">official render of the island</a>, likely based on the <a href="https://www.artstation.com/artwork/aGnAXL" target="_blank" rel="noreferrer">2022 Houdini work</a>, and my generated map tiles.</p>
<div class="flex justify-center items-center">
  <img id="flipflop" src="images/original_everon.jpg" alt="Original and new island images" width="500" height="500" />
</div>
<script>
  document.addEventListener("DOMContentLoaded", function () {
    const flipFlopImage = document.getElementById("flipflop");
    const images = ["images/original_everon.jpg", "images/new_everon.jpg"];
    let currentIndex = 0;

    setInterval(function () {
      currentIndex = (currentIndex + 1) % images.length;
      flipFlopImage.src = images[currentIndex];
    }, 2000); // Change image every 2 seconds
  });
</script>
<p>As you can see, the original topography and field layout are identical, with only the forest zones having changed. This could be down to performance or gameplay considerations.</p>
<h1 id="conclusion" class="relative group">Conclusion <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#conclusion" aria-label="Anchor">#</a></span></h1><p>This was a fun little side project and took around 10 days in total, plus a little extra for this write-up and the git repository documentation. The website has proved to be <a href="https://www.reddit.com/r/ArmaReforger/comments/1j4wb45/interactive_supplies_map_for_everon_and_arland/" target="_blank" rel="noreferrer">very popular with players</a> and now has hundreds of daily users, which makes me happy.</p>
<p>I think this is also a great project to highlight <a href="https://en.wikipedia.org/wiki/T-shaped_skills" target="_blank" rel="noreferrer">T-shaped skills</a>. This kind of challenge can help encourage developers to push outside their comfort zone, and they can discover that their skills are more easily transferred into other domains than they may have first thought. You may not be a professional game developer, but that doesn&rsquo;t stop you from picking up enough to achieve what you want from a game engine.</p>
<p>It&rsquo;s also a good example of sticking to an MVP deliverable. It&rsquo;s no fun adding another half-completed project to the pile, and there&rsquo;s no shame in taking some shortcuts to get your work out the door. You can always return later and improve things incrementally.</p>
<p>Many thanks to <a href="https://tomarmitage.com" target="_blank" rel="noreferrer">Tom Armitage</a> and <a href="https://mathstodon.xyz/@ijm" target="_blank" rel="noreferrer">Ian McEwan</a> for feedback and help with this write-up.</p>
<div class="flex rounded-md bg-primary-100 px-4 py-3 dark:bg-primary-900">
  <span class="pe-3 text-primary-400">
    <span class="icon relative inline-block px-1 align-text-bottom"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512"><path fill="currentColor" d="M352 256c0 22.2-1.2 43.6-3.3 64H163.3c-2.2-20.4-3.3-41.8-3.3-64s1.2-43.6 3.3-64H348.7c2.2 20.4 3.3 41.8 3.3 64zm28.8-64H503.9c5.3 20.5 8.1 41.9 8.1 64s-2.8 43.5-8.1 64H380.8c2.1-20.6 3.2-42 3.2-64s-1.1-43.4-3.2-64zm112.6-32H376.7c-10-63.9-29.8-117.4-55.3-151.6c78.3 20.7 142 77.5 171.9 151.6zm-149.1 0H167.7c6.1-36.4 15.5-68.6 27-94.7c10.5-23.6 22.2-40.7 33.5-51.5C239.4 3.2 248.7 0 256 0s16.6 3.2 27.8 13.8c11.3 10.8 23 27.9 33.5 51.5c11.6 26 20.9 58.2 27 94.7zm-209 0H18.6C48.6 85.9 112.2 29.1 190.6 8.4C165.1 42.6 145.3 96.1 135.3 160zM8.1 192H131.2c-2.1 20.6-3.2 42-3.2 64s1.1 43.4 3.2 64H8.1C2.8 299.5 0 278.1 0 256s2.8-43.5 8.1-64zM194.7 446.6c-11.6-26-20.9-58.2-27-94.6H344.3c-6.1 36.4-15.5 68.6-27 94.6c-10.5 23.6-22.2 40.7-33.5 51.5C272.6 508.8 263.3 512 256 512s-16.6-3.2-27.8-13.8c-11.3-10.8-23-27.9-33.5-51.5zM135.3 352c10 63.9 29.8 117.4 55.3 151.6C112.2 482.9 48.6 426.1 18.6 352H135.3zm358.1 0c-30 74.1-93.6 130.9-171.9 151.6c25.5-34.2 45.2-87.7 55.3-151.6H493.4z"/></svg>
</span>
  </span>
  <span class="dark:text-neutral-300">UPDATE: It&rsquo;s nearly a month after the site went live, and it&rsquo;s served ~2M requests to well over 10,000 players! The map tiles have also been incorporated into two other projects, and you can see them incorporated into the post-match reports at the Reforger <a href="https://battleroyalemod.com" target="_blank" rel="noreferrer">Battle Royale Mod</a>.</span>
</div>

]]></content:encoded>
      </item>
    
      <item>
        <title>Custom Dreambooth Training For Stable Diffusion</title>
        <link>https://nick.recoil.org/articles/dreambooth/</link>
        <guid>https://nick.recoil.org/articles/dreambooth/</guid>
        <pubDate>Fri, 11 Nov 2022 13:22:51 UTC</pubDate>
        <description>&lt;![CDATA[A run-through of how I trained Stable Diffusion with my own face, with discussion of the results and some observations]]></description>
<content:encoded>&lt;![CDATA[<p>The ML image synthesis topic has always been interesting, but it&rsquo;s exploded since August this year, when <a href="https://en.wikipedia.org/wiki/Stable_Diffusion" target="_blank" rel="noreferrer">Stable Diffusion</a> was made open source for anyone to try. Since then, I&rsquo;ve been running a copy of Stable Diffusion locally on my NVIDIA 3070, using the <a href="https://github.com/AUTOMATIC1111/stable-diffusion-webui" target="_blank" rel="noreferrer">WebUI from AUTOMATIC1111</a>.</p>
<p>I actually started out trying to get things working on my previous graphics card, an AMD 6700 XT, but the support and performance gap between <em>CUDA</em> and <em>ROCm</em> within the <strong>pytorch</strong> framework is vast, especially because I don&rsquo;t have a dual-boot system to run Linux natively, where support is much better. I ended up picking up a second-hand 3070 and it&rsquo;s been plain sailing, but only 8 GB of VRAM is a little restrictive.</p>
<h2 id="training-on-google-colab" class="relative group">Training on Google Colab <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#training-on-google-colab" aria-label="Anchor">#</a></span></h2><p>A little while later, a paper was published by Google on a technique called <a href="https://en.wikipedia.org/wiki/DreamBooth" target="_blank" rel="noreferrer">DreamBooth</a>, which allows for additional training and tuning of text-to-image models. People started implementing this on top of Stable Diffusion, but it started out slow and difficult to run on modest hardware.</p>
<p>In recent weeks people have been improving the original approach, finding optimisations to lower the time and hardware requirements. It&rsquo;s reached a point where I wanted to try it out, so I ran the process with <a href="https://github.com/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb" target="_blank" rel="noreferrer">https://github.com/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb</a>.</p>
<p>I assembled about 20 pictures from my photo library, cropped and blurred where appropriate, and resized to 512 px square. I then supplemented them with a few specific photos I took against a white background, at various head angles. After reading some of the discussions, I also decided to run them all through <code>convert &lt;infile&gt; -flop &lt;outfile&gt;</code> to create horizontally mirrored copies. I don&rsquo;t know if this step is required, but it&rsquo;s what I chose, given that training data is more important than additional training steps.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center md:px-12">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/training_set_montage.jpeg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">The input training data I used, with an overemphasis on the head</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>You must also obey the naming convention for these images, with the format <code>&lt;your-prefix&gt; (%d).png</code>, so I used the following bash script:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl"><span class="nv">counter</span><span class="o">=</span><span class="m">1</span>
</span></span><span class="line"><span class="cl"><span class="k">for</span> i in *.png
</span></span><span class="line"><span class="cl"><span class="k">do</span>
</span></span><span class="line"><span class="cl">  cp <span class="nv">$i</span> <span class="s2">&#34;nickludlam (</span><span class="si">${</span><span class="nv">counter</span><span class="si">}</span><span class="s2">).png&#34;</span>
</span></span><span class="line"><span class="cl">  <span class="nb">let</span> <span class="nv">counter</span><span class="o">=</span>counter+1
</span></span><span class="line"><span class="cl"><span class="k">done</span>
</span></span></code></pre></div><p>This renames each image to conform to the required naming format. I uploaded these to Google Drive, in a folder called <strong>DreamBoothTrainingImages/</strong>.</p>
<p>I also decided to pay for the instance. You can run it for free, but you risk termination at any point, so avoiding the prospect of losing 3 hours of work was definitely worth the 3 credits I ended up using.</p>
<p><strong>NOTE:</strong> I would actually add more variety to this if I went through the process again. In testing out prompts, it very strongly wants to focus in on my head, and it&rsquo;s difficult to get images that show shoulders or upper torso. In order to make wider compositions more likely to be created, you need to train it with photos of similar poses.</p>
<h2 id="google-colab" class="relative group">Google Colab <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#google-colab" aria-label="Anchor">#</a></span></h2><p>When <a href="https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb" target="_blank" rel="noreferrer">running the notebook on Colab</a>, there are a few buttons that need clicking, and prompts that needed filling in. Although the notebook is fairly well documented, not all of it was clear, so here&rsquo;s a summary of what I entered into the page:</p>
<table>
  <thead>
      <tr>
          <th>Variable name</th>
          <th>Value</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td><strong>Huggingface_Token</strong></td>
          <td><em>[copy from <a href="https://huggingface.co/settings/tokens" target="_blank" rel="noreferrer">Huggingface</a>]</em></td>
      </tr>
      <tr>
          <td><strong>Session_Name</strong></td>
          <td>nickludlam</td>
      </tr>
      <tr>
          <td><strong>IMAGES_FOLDER_OPTIONAL</strong></td>
          <td>/content/gdrive/MyDrive/DreamBoothTrainingImages</td>
      </tr>
      <tr>
          <td><strong>Contains_faces</strong></td>
          <td>Male</td>
      </tr>
      <tr>
          <td><strong>Crop_images</strong></td>
          <td>Unchecked</td>
      </tr>
      <tr>
          <td><strong>Training_Steps</strong></td>
          <td><em>[should be set according to their suggestions, but I found that this caused overtraining, possibly because of my mirrored duplicate images]</em></td>
      </tr>
  </tbody>
</table>
<p>I left everything else as default. The repo is being updated frequently, so double check everything you&rsquo;re typing in, as things might have changed by the time you run this.</p>
<h2 id="training" class="relative group">Training <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#training" aria-label="Anchor">#</a></span></h2><p>The training is split into two halves. First is the <em>text encoder</em> training, and then comes the <em>unet</em> training. For me, the first stage took about one hour, and the second stage about two hours. You&rsquo;ll see an output like this:</p>
<pre tabindex="0"><code>Training the text encoder with regularization...
&#39;########:&#39;########:::::&#39;###::::&#39;####:&#39;##::: ##:&#39;####:&#39;##::: ##::&#39;######:::
... ##..:: ##.... ##:::&#39;## ##:::. ##:: ###:: ##:. ##:: ###:: ##:&#39;##... ##::
::: ##:::: ##:::: ##::&#39;##:. ##::: ##:: ####: ##:: ##:: ####: ##: ##:::..:::
::: ##:::: ########::&#39;##:::. ##:: ##:: ## ## ##:: ##:: ## ## ##: ##::&#39;####:
::: ##:::: ##.. ##::: #########:: ##:: ##. ####:: ##:: ##. ####: ##::: ##::
::: ##:::: ##::. ##:: ##.... ##:: ##:: ##:. ###:: ##:: ##:. ###: ##::: ##::
::: ##:::: ##:::. ##: ##:::: ##:&#39;####: ##::. ##:&#39;####: ##::. ##:. ######:::
:::..:::::..:::::..::..:::::..::....::..::::..::....::..::::..:::......::::

Progress:|██████████████████       | 73% 2136/2940 [53:35&lt;20:07,  1.50s/it, loss=0.476, lr=6.2e-7] nickludlam
</code></pre><p>The final step in the process is to convert the training data into a checkpoint file and copy it to your Google Drive. In this case, it&rsquo;s:</p>
<pre>My Drive/Fast-Dreambooth/Sessions/nickludlam/nickludlam.ckpt</pre>
<p>Download this file and drop it into the <strong>models/Stable-diffusion/</strong> directory inside your automatic1111 repository installation, then run the UI and your checkpoint will be available in the checkpoint dropdown at the top.</p>
<p>Don&rsquo;t forget to spin down the instance once you&rsquo;re done!</p>
<h2 id="testing" class="relative group">Testing <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#testing" aria-label="Anchor">#</a></span></h2><p>I experimented with a number of prompts based on some posts I found online:</p>
<ul>
<li><a href="https://github.com/XavierXiao/Dreambooth-Stable-Diffusion/issues/4" target="_blank" rel="noreferrer">https://github.com/XavierXiao/Dreambooth-Stable-Diffusion/issues/4</a></li>
<li><a href="https://www.reddit.com/r/StableDiffusion/comments/ya4zxm/dreambooth_is_crazy_prompts_workflow_in_comments/" target="_blank" rel="noreferrer">https://www.reddit.com/r/StableDiffusion/comments/ya4zxm/dreambooth_is_crazy_prompts_workflow_in_comments/</a></li>
<li><a href="https://www.reddit.com/r/StableDiffusion/comments/ygi228/prompts_for_trained_dreambooth_generations_of/" target="_blank" rel="noreferrer">https://www.reddit.com/r/StableDiffusion/comments/ygi228/prompts_for_trained_dreambooth_generations_of/</a></li>
<li><a href="https://www.reddit.com/r/StableDiffusion/comments/xu7cg8/using_dreambooth_to_create_art_of_anime/" target="_blank" rel="noreferrer">https://www.reddit.com/r/StableDiffusion/comments/xu7cg8/using_dreambooth_to_create_art_of_anime/</a></li>
<li><a href="https://publicprompts.art/comic-art/" target="_blank" rel="noreferrer">https://publicprompts.art/comic-art/</a></li>
<li><a href="https://lexica.art/" target="_blank" rel="noreferrer">https://lexica.art/</a></li>
</ul>
<p>One of the biggest issues I had was finding the right balance between Sampling Steps, CFG Scale and word emphasis within the prompt. Take one of the suggested prompts, such as the following:</p>
<pre tabindex="0"><code>photo of nickludlam as an astronaut, glasses, helmet in
alien world abstract oil painting, greg rutkowski, detailed face
</code></pre><p>I found that my model was overtrained, and would just repeatedly produce images resembling the training data. One way around this was to emphasize the target words in the prompt. So for the above, you wrap key terms in round brackets to become:</p>
<pre tabindex="0"><code>photo of nickludlam as an ((astronaut)), glasses, (helmet) in
alien world abstract oil painting, greg rutkowski, detailed face
</code></pre><p>You can add more brackets for additional emphasis. I also tried many of the different sampling methods. It took a LOT of experimentation to get some nice images out, so be patient.</p>
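<p>For reference, I believe the automatic1111 web UI also supports explicit attention weights using the <code>(term:weight)</code> syntax, which is easier to tune than stacking brackets. The weights here are just illustrative, but the bracketed prompt above could be written as something like:</p>
<pre tabindex="0"><code>photo of nickludlam as an (astronaut:1.3), glasses, (helmet:1.1) in
alien world abstract oil painting, greg rutkowski, detailed face
</code></pre>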








<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-2 gap-2 justify-items-center md:px-12">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/montage.jpeg"
        alt="First experiments with my trained data set" style="margin: 0" />
      
    </picture>
  </figure>
</div>

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/montage2.jpeg"
        alt="Results from the second day of experimentation" style="margin: 0" />
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>As you can see, I achieved some nicely varied results, given enough time and patience with prompt crafting.</p>
<h2 id="overtraining" class="relative group">Overtraining <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#overtraining" aria-label="Anchor">#</a></span></h2><p>When attempting to use the newly trained checkpoint file, I often find that it prefers to give me results which just feature a head, even when I&rsquo;ve asked for an image with shoulders or upper body. As mentioned earlier, I would definitely include more variety in the stance from the subject.</p>
<p>I&rsquo;ve also seen my face present in a lot of other images of white males, even when my specific keyword is not in the actual prompt, which I believe is a strong indicator of overtraining.</p>








<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-2 gap-2 justify-items-center md:px-12">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/1_original.jpeg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">A portrait of Jeff Goldblum from StableDiffusion v1.5</figcaption>
      
    </picture>
  </figure>
</div>

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/1_overtrained.jpeg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">A portrait of Jeff Goldblum from my trained checkpoint</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>








<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-2 gap-2 justify-items-center md:px-12">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/2_original.jpeg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">A portrait of Brad Pitt from StableDiffusion v1.5</figcaption>
      
    </picture>
  </figure>
</div>

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/2_overtrained.jpeg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">A portrait of Brad Pitt from my trained checkpoint</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>








<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-2 gap-2 justify-items-center md:px-12">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/3_original.jpeg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">A portrait of Tom Cruise from StableDiffusion v1.5</figcaption>
      
    </picture>
  </figure>
</div>

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="images/3_overtrained.jpeg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">A portrait of Tom Cruise from my trained checkpoint</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>I tested this by using the same conditions for both checkpoints; for instance, the middle pair is derived from:</p>
<pre tabindex="0"><code>brad pitt wearing a tuxedo, portrait, highly detailed, digital painting, artstation,
concept art, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha

Steps: 30, Sampler: Euler a, CFG scale: 7, Seed: 3120218309, Size: 512x512
</code></pre><h2 id="conclusion" class="relative group">Conclusion <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#conclusion" aria-label="Anchor">#</a></span></h2><p>Overall I&rsquo;m really happy with the results I can get, but there are parts of the process I&rsquo;d do differently next time.</p>
<p>Firstly, as mentioned above, there was not enough variety in the training data. Having photos where I&rsquo;m further from the camera is important. Some full-body and upper torso shots would help with the variety and creativity of prompts.</p>
<p>I&rsquo;m also unsure whether using flopped images is as necessary when your face is effectively symmetrical. It is likely still useful for shots where you&rsquo;re looking to the left or right, where a horizontally flipped image will look sufficiently different. It wouldn&rsquo;t work if you have facial features like moles, or anything else which would be obviously mirrored.</p>
<p>I would also take advantage of the ability to test intermediate checkpoints and resume additional training as desired. That way you can stop when the results strike the right balance between accurately rendering your likeness and retaining creativity in composition.</p>
<h2 id="other-writeups" class="relative group">Other writeups <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#other-writeups" aria-label="Anchor">#</a></span></h2><p>There&rsquo;s a very detailed writeup of the same process on <a href="https://bytexd.com/how-to-use-dreambooth-to-fine-tune-stable-diffusion-colab/" target="_blank" rel="noreferrer">bytexd.com</a>, and an additional <a href="https://huggingface.co/blog/dreambooth" target="_blank" rel="noreferrer">writeup from huggingface</a> that goes into the differences between this technique and the original textual inversion, and lots of images showing the effects of different learning rates.</p>
]]></content:encoded>
      </item>
    
      <item>
        <title>Playdeo Part 3 - Technology &amp; Tools</title>
        <link>https://nick.recoil.org/work/playdeo-technology/</link>
        <guid>https://nick.recoil.org/work/playdeo-technology/</guid>
        <pubDate>Mon, 13 Jun 2022 16:43:00 &#43;0000</pubDate>
        <description>&lt;![CDATA[]]></description>
        <content:encoded>&lt;![CDATA[<p>Work at Playdeo involved solving unique challenges, and I&rsquo;ve unpacked some of them in a little more detail here. Below is a playthrough of Episode One in the game, to give you a sense of what the game looked like.</p>
<div class="flex justify-center">
  <div class="mx-4 md-12 lg:mx-20 w-full md:w-1/2 aspect-video">
    <iframe class="w-full h-full" sandbox="allow-same-origin allow-scripts allow-popups" src="https://crank.recoil.org/videos/embed/0b1a3fa8-2b0b-41ad-b866-a3d444ecbfd6" frameborder="0" allowfullscreen></iframe>
  </div>
</div>
<h2 id="smart-double-buffered-video" class="relative group">Smart, double-buffered video <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#smart-double-buffered-video" aria-label="Anchor">#</a></span></h2><p>In Playdeo&rsquo;s games, we compress and linearise hundreds of separate video clips into one large MP4 file, and when we need different camera angles or video sequences, we ask the video player to play from different timestamps in the file.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="/assets/img/playdeo/video_file_layout_simplified.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">An example of the internal architecture of our MP4 files</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>There are two classes of clips. <strong>Sequences</strong> are clips of narrative video, usually featuring dialogue, and they are typically played back in order, and only once to the player. They tell the story. The other class of clips are <strong>Camera Coverage</strong>, and they are typically environmental shots, and are designed to loop. These are the clips we use when you are navigating Avo around the world.</p>
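<p>To make the clip model concrete, here&rsquo;s a minimal sketch of what the Unity-side bookkeeping can look like. The type and member names are illustrative rather than our actual API: each clip is simply a labelled time range inside the master MP4, plus a flag for whether it loops.</p>
<pre tabindex="0"><code class="language-csharp" data-lang="csharp">using System.Collections.Generic;

// Illustrative sketch, not the real Playdeo types: a clip is a labelled
// time range inside the single linearised MP4 file.
public enum ClipKind { Sequence, CameraCoverage }

public class ClipRange
{
    public string Name;            // e.g. a scene/shot/camera label
    public ClipKind Kind;          // Sequence = play once, CameraCoverage = loops
    public double StartSeconds;    // offset into the master MP4
    public double DurationSeconds;

    public double EndSeconds { get { return StartSeconds + DurationSeconds; } }
}

public class ClipLibrary
{
    private readonly Dictionary&lt;string, ClipRange&gt; clips = new Dictionary&lt;string, ClipRange&gt;();

    public void Add(ClipRange clip) { clips[clip.Name] = clip; }

    // Playing a clip amounts to seeking the native player to the clip start
    // and letting it run (or loop) until the clip end.
    public ClipRange Lookup(string name) { return clips[name]; }
}
</code></pre>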
<p>During our prototyping phase, we were mostly concerned with the content and interactivity of the visual frame, and not so much about narrative and story. All that changed with our Alpha 2 prototype of Avo, and for the first time we had an actor, a script and a need to edit different takes and camera angles together into a coherent timeline. This instantly presented us with a problem: moving video playback to different portions of a single video file with a <code>seek</code> command was much, much slower when there&rsquo;s an audio track. Without audio, the seek response is near instantaneous; with audio, it drops down to about 0.2s. That might not sound like a lot, but it&rsquo;s extremely noticeable when playing the game, and each camera cut or edit freezes the frame before delivering new frames.</p>
<p>The solution was to run two video decoders in parallel, both reading from the same MP4 file. Modern iOS hardware made this feasible on most newer devices, and for the older iPhone and iPad models we were careful to leave an option to disable the second video player entirely. This allowed us to support devices like <a href="https://en.wikipedia.org/wiki/IPad_Air" target="_blank" rel="noreferrer">the original iPad Air from 2013</a>, at the expense of the improved edits. It left a small pause in video playback each time there was a cut, but it didn&rsquo;t detract too much from gameplay.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="/assets/img/playdeo/double_buffered_playback.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">An example of how each player takes it in turns to play the video clips</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>Internally the system was known as <em>AV2</em>, and made use of an asynchronous queue and two consumer classes, one for each underlying instance of the video player. You could either request video be played immediately, or after the current clip has finished, in more of a playlist fashion. It was also smart enough that, when asked to play a clip which immediately followed the currently playing clip inside the MP4 file, it would simply let the playhead run on into the new clip in the currently active player, saving a swap to the inactive player and creating a smoother experience. To put that another way, if clips A, B and C are adjacent within the video file and you ask to play them all back, instead of splitting them across the two video players it plays all three back to back in the same player.</p>
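<p>Here&rsquo;s a minimal sketch of that adjacency optimisation, using the <code>ClipRange</code> type from the earlier sketch. The player interface and names are hypothetical stand-ins for the native plugin wrapper, not the real AV2 code.</p>
<pre tabindex="0"><code class="language-csharp" data-lang="csharp">using System;

// Illustrative stand-in for the native (AVFoundation-backed) player wrapper.
public interface INativeVideoPlayer
{
    double CurrentClipEnd { get; }
    void ExtendPlaybackTo(double endSeconds);
    void PrepareAt(double startSeconds);
}

// Illustrative sketch of the AV2 adjacency optimisation.
public class DoubleBufferedVideo
{
    private readonly INativeVideoPlayer activePlayer; // decoder currently on screen
    private readonly INativeVideoPlayer idlePlayer;   // decoder waiting to take over
    private bool swapOnClipEnd;

    public DoubleBufferedVideo(INativeVideoPlayer a, INativeVideoPlayer b)
    {
        activePlayer = a;
        idlePlayer = b;
    }

    public bool SwapPending { get { return swapOnClipEnd; } }

    public void EnqueueClip(ClipRange next)
    {
        const double frameTolerance = 1.0 / 60.0;

        // Adjacent means the requested clip starts where the current clip ends.
        bool adjacent = Math.Abs(next.StartSeconds - activePlayer.CurrentClipEnd) &lt; frameTolerance;

        if (adjacent)
        {
            // Same decoder, no seek: let the playhead run straight on into
            // the next clip for a seamless join.
            activePlayer.ExtendPlaybackTo(next.EndSeconds);
        }
        else
        {
            // Different part of the file: pre-seek the idle decoder so the
            // swap is instant when the current clip finishes.
            idlePlayer.PrepareAt(next.StartSeconds);
            swapOnClipEnd = true;
        }
    }
}
</code></pre>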






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="/assets/img/playdeo/naive_clip_playback.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Clips played back in a naive manner, distributed between video players</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="/assets/img/playdeo/smart_clip_playback.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Smart clip playback, where consecutive clips are played by the same player</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>One further development we made was to give the video playback plugin complete autonomy over when it should either loop back to a starting frame or stop delivering video. On mobile devices you commonly suffer interruptions that stop your game from being the primary application in focus. These interruptions would cause problems with video playback synchronisation, and when you return from an interruption the video player might have advanced beyond the limits of the clip you wanted to play. Because the video texture inside Unity is marked as <a href="https://docs.unity3d.com/ScriptReference/Texture2D.CreateExternalTexture.html" target="_blank" rel="noreferrer">External</a>, if the playhead of the video player has strayed beyond the clip it was supposed to play, the new video frame overwrites the previous texture in memory. The player sees flashes of video frames that are out of sequence, and it destroys the gameplay experience.</p>
<p>To get around this issue, every time we request certain frame ranges to be played back, the video player creates guard zones it can autonomously respond to during playback with no connection to Unity. AVFoundation on iOS has this <em>incredibly</em> useful function called <a href="https://developer.apple.com/documentation/avfoundation/avplayer/1388027-addboundarytimeobserver" target="_blank" rel="noreferrer">addBoundaryTimeObserver(forTimes:queue:using:)</a> for exactly this use-case.</p>
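<p>From the Unity side, this meant a playback request carried not just a seek target but also the boundaries the native player should police on its own. A rough sketch of such a request, again with hypothetical names and reusing the <code>ClipRange</code> type from earlier, might look like this:</p>
<pre tabindex="0"><code class="language-csharp" data-lang="csharp">// Illustrative sketch of a playback request handed to the native plugin.
// The native side registers a boundary time observer at EndSeconds so it
// can loop or hold the last frame by itself, with no round trip to Unity.
[System.Serializable]
public struct PlaybackRequest
{
    public double StartSeconds;  // where to seek to in the master MP4
    public double EndSeconds;    // guard boundary enforced by the native player
    public bool LoopAtEnd;       // loop back to StartSeconds, or hold the frame

    public static PlaybackRequest ForClip(ClipRange clip, bool loop)
    {
        return new PlaybackRequest
        {
            StartSeconds = clip.StartSeconds,
            EndSeconds = clip.EndSeconds,
            LoopAtEnd = loop
        };
    }
}
</code></pre>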
<p>The video plugin grew in complexity over the lifespan of Avo, and thankfully AVFoundation is an incredibly rich media API to sit on top of, which prevented this code from spiralling into something much more complex and time consuming. We also got an Android build running with a custom video player we implemented in house, but this never saw the light of day in a full release, sadly.</p>
<h2 id="bluetooth-latency-and-pathedl" class="relative group">Bluetooth latency and PathEDL <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#bluetooth-latency-and-pathedl" aria-label="Anchor">#</a></span></h2><p>A few months after getting the double buffered video player system working, Apple announced the removal of the headphone jack on their iOS devices, and so started a huge increase in the number of bluetooth audio devices in everyday use. As these bluetooth devices became more commonplace, we started to use them with our laptops.</p>
<p>For the longest time, I was puzzled over the variable seek response time for video in our system. Sometimes video would be quick and responsive, other times it would feel sluggish and slow to react. It was a number of weeks before I put two and two together and realised it was my own use of Bluetooth headphones while programming that was the root cause of this change.</p>
<p>As mentioned in the <a href="#smart-double-buffered-video">Smart, double-buffered video</a> section, there&rsquo;s an inherent latency of around 0.2s for video with an audio track. Bluetooth audio adds its own latency on top of this, and can vary between 0.2s and 0.3s depending on the manufacturer. So during gameplay, we need to account for a delay in any request to play new video from 0.2s up to 0.5s. This presented a problem for us in the way Avo&rsquo;s movement triggers different camera angles.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="/assets/img/playdeo/player_alternating.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Alternating between Players, including the variable pre-roll depending on audio latency</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>Luckily on iOS it&rsquo;s very easy to retrieve the exact value of this latency from the shared audio session:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-objc" data-lang="objc"><span class="line"><span class="cl"><span class="k">static</span> <span class="kt">float</span> <span class="nf">GetAudioOutputLatency</span><span class="p">()</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">    <span class="k">return</span> <span class="p">[[</span><span class="n">AVAudioSession</span> <span class="n">sharedInstance</span><span class="p">]</span> <span class="n">outputLatency</span><span class="p">];</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span></span></span></code></pre></div>
<p>So the total latency in the system is equal to <em>0.2s + AudioOutputLatency</em>. Avo&rsquo;s line drawing method of movement is actually the perfect uniform system for accommodating future predictions. If it takes us 0.4s to swap cameras as Avo moves, all we need to do is keep Avo&rsquo;s trigger point constantly ahead of him by 0.4s worth of travel, so the cut lands perfectly by the time his legs catch up to the predicted position.</p>
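<p>Expressed as a quick sketch (illustrative, not our production code), the look-ahead distance is simply the distance Avo will cover during the total seek latency:</p>
<pre tabindex="0"><code class="language-csharp" data-lang="csharp">// Illustrative sketch: place the camera-change trigger ahead of Avo by the
// distance he will cover while the video seek completes.
public static class CameraTriggerLookahead
{
    // Seek cost when the master MP4 has an audio track.
    const float BaseSeekLatencySeconds = 0.2f;

    // audioOutputLatency is the value reported by the native plugin, i.e. the
    // AVAudioSession outputLatency value on iOS (near zero for wired output,
    // roughly 0.2s to 0.3s for Bluetooth).
    public static float LookaheadDistance(float avoSpeed, float audioOutputLatency)
    {
        float totalLatency = BaseSeekLatencySeconds + audioOutputLatency;
        return avoSpeed * totalLatency; // distance ahead along the drawn line
    }
}
</code></pre>
<p>The trigger point then sits that far ahead of Avo along the drawn line, and is re-evaluated as his speed changes.</p>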






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="flex md:px-24 justify-center">
  <figure>
    <video style="margin: 0"  muted autoplay loop playsinline>  
      <source src="/assets/video/playdeo/latency_demo.mp4" type="video/mp4" />
      <source src="/assets/video/playdeo/latency_demo.webm" type="video/webm" />
    </video>
    
    <figcaption class="text-center">Bluetooth latency demonstration. </figcaption>
    
</figure>
</div>

    </div>
  </div>
</div>
<p>The video above demonstrates the difference Bluetooth audio makes to the latency. The white cube in front of Avo represents the distance from his current position that directly correlates to the video seek delay. The longer the audio latency, the further away the cube is, and the faster Avo moves, the further away the cube gets.</p>
<p>At Avo&rsquo;s standard speed, the cube is roughly one cube-width away from him when not using Bluetooth, increasing to roughly two widths when Bluetooth audio is used. This is a small difference at his relatively slow standard speed, but within the game we give him a little speed boost every time he collects a jelly bean, so his speed can be much greater depending on the situation, and the future prediction distance is correspondingly much larger.</p>
<p>As my work on accounting for camera change latency started to bear fruit, it became apparent that we now had an adjacent problem to contend with. If you happened to draw particular lines which interacted badly with our camera zones, like drawing multiple loops that graze a closeup camera, the resulting video as he walked the line was dizzying, as it would rapidly cut to a closeup, then back out to a wide shot, over and over again. Along with that particular extreme scenario, there were numerous instances of these bad lines, and the player had no way of knowing if their path would look bad while being traversed. We needed a generalised solution.</p>
<p>The answer was once again in the determinism of the line drawing movement system. Since we only allow players to draw a line which is navigable, the line is a perfect representation of the future. This line <strong>will</strong> be walked, and the only thing that can change is the speed at which it&rsquo;s traversed.</p>
<p>Up until then, the camera system was entirely based on collider triggers. When a collider entered into a zone, a camera change would be triggered immediately. The latency solution meant that the collider causing the change was in advance of Avo&rsquo;s current position by 0.2-0.5 seconds, but it was still real-time. We needed to move to a solution where the entire line was simulated, and every future camera change collected by a system I called <strong>PathEDL</strong>.</p>
<p>In video editing, the <a href="https://en.wikipedia.org/wiki/Edit_decision_list" target="_blank" rel="noreferrer">Edit Decision List</a> represents the arrangement of video frames that an editor wants to play back. In our case, every line drawn by the player would be its own little video edit, with a beginning, a middle and an end. If we sample points down the line, and correlate them with the camera zones they pass through, then we can form a list of all potential cameras to use for every point along the line.</p>
<p>We can then run a set of filters for these points, and derive a final set of camera cuts best suited to frame Avo as he walks, or in some cases zooms down the line.</p>
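<p>Conceptually, that pass is small: sample the drawn path at a fixed interval, record which camera zone wins at each sample, collapse runs of the same camera into edits, and then run the filters over the resulting list. The sketch below is illustrative only; the types, and the <code>bestCameraAt</code> zone query in particular, are stand-ins rather than the real PathEDL code.</p>
<pre tabindex="0"><code class="language-csharp" data-lang="csharp">using System;
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch of building a PathEDL: one edit per stretch of the
// path where the same camera is the best choice.
public struct PathEdit
{
    public string CameraId;
    public float StartTime; // seconds from the start of the walk
    public float EndTime;
}

public interface IPathEdlFilter
{
    // Filters mutate the edit list, e.g. removing or merging short cuts.
    void Apply(List&lt;PathEdit&gt; edits);
}

public static class PathEdlBuilder
{
    public static List&lt;PathEdit&gt; Build(IList&lt;Vector3&gt; pathPoints, float walkSpeed,
                                       Func&lt;Vector3, string&gt; bestCameraAt,
                                       IEnumerable&lt;IPathEdlFilter&gt; filters)
    {
        var edits = new List&lt;PathEdit&gt;();
        float time = 0f;

        for (int i = 0; i &lt; pathPoints.Count; i++)
        {
            string cam = bestCameraAt(pathPoints[i]);

            if (edits.Count == 0 || edits[edits.Count - 1].CameraId != cam)
            {
                // A new camera takes over: close the previous edit here.
                if (edits.Count &gt; 0)
                {
                    var prev = edits[edits.Count - 1];
                    prev.EndTime = time;
                    edits[edits.Count - 1] = prev;
                }
                edits.Add(new PathEdit { CameraId = cam, StartTime = time, EndTime = time });
            }

            if (i &lt; pathPoints.Count - 1)
            {
                // Advance the clock by the time taken to walk to the next sample.
                time += Vector3.Distance(pathPoints[i], pathPoints[i + 1]) / walkSpeed;
            }
            else
            {
                // End of the path: close the final edit.
                var last = edits[edits.Count - 1];
                last.EndTime = time;
                edits[edits.Count - 1] = last;
            }
        }

        foreach (var filter in filters)
            filter.Apply(edits);

        return edits;
    }
}
</code></pre>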






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="/assets/img/playdeo/path_edl.jpeg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">The PathEDL filter system in the Unity editor</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>In the above image, you can see an example of the PathEDL system at work. In the right pane, you can see the geometry of the level, including the camera visibility zones, and his currently drawn path as the standard dotted black line.</p>
<p>In the bottom left, you can see the timeline, which gives you a visual representation of all the cuts that will occur. The overall duration for this line is 6.9 seconds, with 5 different camera angles that will frame Avo.</p>
<p>The key to this system is that it applies rules not only spatially, but across time as well. In the top left, you can see several filters that are configured to apply to Avo&rsquo;s movement path.</p>
<table class="table-auto">
  <thead>
    <tr>
      <th>Filter Name</th>
      <th>Description</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Start Zone Minimum Time Filter</td>
      <td>Ensures that we dwell for a minimum of 0.4s before making our first cut. This helps the player understand the first cut with good spatial awareness</td>
    </tr>
    <tr>
      <td>End Zone Minimum Time Filter</td>
      <td>Ensures that the final camera cut that will occur for this path is at least 0.5s long. It provides a sense of stability and finality to the sequence of cuts that will frame Avo's movement</td>
    </tr>
    <tr>
      <td>Remove Short Edits Filter</td>
      <td>Removes any camera edits within the PathEDL that would last for less than ~1 second. This helps make Avo's movement feel less frantic</td>
    </tr>
    <tr>
      <td>Select Alternate At Screen Edge Filter</td>
      <td>Ensures that short lines drawn at the edge of the screen select any cameras that are available other than the existing one. This means people can escape being trapped in a view they do not want</td>
    </tr>
    <tr>
      <td>Exclude Camera Zone Type Filter</td>
      <td>Excludes any cameras belonging to a given zone type from being considered when building the edit for a path</td>
    </tr>
  </tbody>
</table>
<p>As you change or toggle these filters, you get a live preview of how they will affect the edit, and can construct a filter set that makes appropriately player-friendly choices. This had an absolutely transformative effect on the game, and walking around felt far less unpleasant and awkward.</p>
<p>One of our developers <a href="https://www.linkedin.com/in/gnascim" target="_blank" rel="noreferrer">Geraldo Nascimento</a> really took the initial idea and extended it with a lot of great looking visualisation tools and quality of life improvements for later work.</p>
<h2 id="the-sequencer" class="relative group">The Sequencer <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#the-sequencer" aria-label="Anchor">#</a></span></h2><p>The other distinct aspect of working with video inside Unity was a matter of timing. The video playback code was entirely separate from Unity, and so we needed to integrate video-time with Unity-time in a seamless way. Jon had previously used a node-based system used by UsTwo games on Monument Valley, and suggested we use something similar. We ended up with a tool we called the <em>Sequencer</em>.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center md:px-24">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="/assets/img/playdeo/sequencer_example.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">A typical Sequencer graph</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>The graph starts executing from the top left node, and continues down each connection as each node finishes. Nodes can either be of type <strong>void</strong> and span no frames, or of type <strong>enumerator</strong> and span one or more frames. The purple nodes with thumbnails show the <em>Play Video</em> nodes, where additional node chains can be triggered by reaching specific frames in the video.</p>
<p>This allowed us to seamlessly mix <em>player events</em> and <em>video events</em> during the level design process, and keep the design open so we could respond to the flow. We could take something that was previously triggered by the player at any point, and instead trigger it at a certain frame in the video, or vice versa.</p>
<p>A basic wait node looks something like this:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-csharp" data-lang="csharp"><span class="line"><span class="cl"><span class="kd">public</span> <span class="k">class</span> <span class="nc">WaitForTimeNode</span> <span class="p">:</span> <span class="n">SequenceNode</span><span class="p">,</span> <span class="n">ISequenceNodeEnumerator</span>
</span></span><span class="line"><span class="cl"><span class="p">{</span>
</span></span><span class="line"><span class="cl">    <span class="kd">public</span> <span class="k">new</span> <span class="kd">static</span> <span class="kt">string</span> <span class="n">ReadableName</span> <span class="p">=</span> <span class="s">&#34;Delay/Wait For Time&#34;</span><span class="p">;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="kd">public</span> <span class="kt">float</span> <span class="n">WaitTimeInSeconds</span><span class="p">;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="kd">public</span> <span class="n">IEnumerator</span> <span class="n">GetEnumerator</span><span class="p">()</span>
</span></span><span class="line"><span class="cl">    <span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="kt">float</span> <span class="n">startTime</span> <span class="p">=</span> <span class="n">Time</span><span class="p">.</span><span class="n">time</span><span class="p">;</span>
</span></span><span class="line"><span class="cl">        <span class="k">if</span> <span class="p">(!</span><span class="n">isPlayerFastForwarding</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">        <span class="p">{</span>
</span></span><span class="line"><span class="cl">            <span class="k">while</span> <span class="p">(</span><span class="n">Time</span><span class="p">.</span><span class="n">time</span> <span class="p">&lt;</span> <span class="n">startTime</span> <span class="p">+</span> <span class="n">WaitTimeInSeconds</span> <span class="p">&amp;&amp;</span>
</span></span><span class="line"><span class="cl">              <span class="p">!</span><span class="n">isPlayerFastForwarding</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">            <span class="p">{</span>
</span></span><span class="line"><span class="cl">                <span class="k">yield</span> <span class="k">return</span> <span class="kc">null</span><span class="p">;</span>
</span></span><span class="line"><span class="cl">            <span class="p">}</span>
</span></span><span class="line"><span class="cl">        <span class="p">}</span>
</span></span><span class="line"><span class="cl">    <span class="p">}</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span></span></span></code></pre></div>
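<p>For context, the runner driving a graph like this is conceptually tiny. The sketch below is illustrative rather than the real Sequencer; in particular the <code>Execute</code> method and the <code>Next</code> connection are hypothetical stand-ins for however the node graph exposes its links. Void nodes complete immediately, while enumerator nodes are stepped one frame at a time before execution moves down the next connection.</p>
<pre tabindex="0"><code class="language-csharp" data-lang="csharp">using System.Collections;
using UnityEngine;

// Illustrative sketch of walking a chain of sequence nodes from a coroutine.
public class SequenceRunner : MonoBehaviour
{
    public IEnumerator Run(SequenceNode first)
    {
        SequenceNode current = first;
        while (current != null)
        {
            if (current is ISequenceNodeEnumerator spanning)
            {
                // Enumerator nodes span one or more frames: step them until
                // they finish, yielding back to Unity each frame.
                IEnumerator e = spanning.GetEnumerator();
                while (e.MoveNext())
                    yield return e.Current;
            }
            else
            {
                current.Execute(); // hypothetical entry point for void nodes
            }

            current = current.Next; // hypothetical single-output connection
        }
    }
}
</code></pre>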
<p>As we developed this system from extremely humble beginnings, it was clear to us that this architecture was very powerful. It took more and more of a hold on how we thought about constructing authored experiences in a level, and we improved it with every title we worked on. Later iterations featured a hugely improved authoring UI, and we also expanded the node flow to allow for looping and light branching logic.</p>
<p>The <code>isPlayerFastForwarding</code> boolean is an interesting little feature related to two things. One was having the ability for players to <em>skip video sequences</em> during gameplay, and the other was <em>level state</em>. When we started to work on Avo, we had little to no idea what we would need to track about the player&rsquo;s game progress. Would we need to store the state of doors and keys, for instance? What about spawned enemies? A player inventory perhaps? We had no idea!</p>
<p>In order to keep it extremely open and flexible, we based our solution around authored <em>checkpoints</em>. In any given level, a number of linear checkpoints are created, each of which may be tied to a sequence that is run to mutate the level state. We store which checkpoint a player has reached, and upon loading that level, we step through each checkpoint and play the associated sequences until we hit the last one reached, then let the player carry on. This way we can give level designers complete autonomy, as long as progression-based state mutation is always handled by the sequencer.</p>
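<p>A minimal sketch of that restoration loop, under the assumption that each checkpoint simply references the sequence that mutates the level up to that point (and reusing the illustrative <code>SequenceRunner</code> from above):</p>
<pre tabindex="0"><code class="language-csharp" data-lang="csharp">using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch of checkpoint-based level restoration: replay the
// sequence attached to every checkpoint the player has already reached,
// in order, so the level state mutates back to where they left off.
public class CheckpointRestorer : MonoBehaviour
{
    public List&lt;SequenceNode&gt; checkpointSequences; // hypothetical: one entry per checkpoint
    public SequenceRunner runner;

    public IEnumerator RestoreTo(int lastReachedCheckpoint)
    {
        for (int i = 0; i &lt;= lastReachedCheckpoint &amp;&amp; i &lt; checkpointSequences.Count; i++)
        {
            // A flag like isPlayerFastForwarding would be set here so that
            // waits and video playback are skipped during restoration.
            yield return StartCoroutine(runner.Run(checkpointSequences[i]));
        }
    }
}
</code></pre>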
<h2 id="the-video-pipeline" class="relative group">The video pipeline <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#the-video-pipeline" aria-label="Anchor">#</a></span></h2><p>As the amount of data we had to process was undergoing a dramatic increase, we also had to keep on top of the data pipeline. During my years working at Framestore, I knew that for complex setups using multiple pieces of software, the number one source of issues was human error. It&rsquo;s a balancing act. A pipeline that&rsquo;s too restrictive will make unforseen errors cost a great deal of time, as they bottleneck on access to the developers. Not restrictive enough, you let problems roll downhill into Unity, and they become much harder to diagnose.</p>
<p>We refined the ingest process over many months, and initially I was the one person who was responsible for running and maintaining this process, but that slowly shifted, and became a job mostly run by our editors or other developers on the team. This went hand in hand with an increasingly complex process, as we had to accommodate new features like subtitles, compound clips, and post-production software changes. The ingest script grew to about 3500 lines of Python, and contained parsers for FCPXML, Resolve CSV files, PFTrack data, FBXs and video files.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="/assets/img/playdeo/redjune_ingest_process.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">The video data pipeline for Avo</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>Half the battle with this type of work is to find ways of making concrete cross-references between different files running through different pieces of software. Some software is scriptable, and we made use of this in PFTrack, where an export of the data would also create additional metadata files important for these connections. Sometimes you need to ask for specific naming conventions to be obeyed, and you enforce that during ingest.</p>
<p>Over the course of about 3 months, we gradually went from a situation where I had to run every single ingest step to one where others were self sufficient. A surprising amount of this time was spent refining error messages that would allow the user to diagnose issues themselves. This wasn&rsquo;t always easy, and required tracking far more data than you&rsquo;d make use of in the final output, like specific line numbers of input files which might go on to cause issues later.</p>
<p>I class this pipeline as generally successful, as we were able to bring freelancers into the company and train them to use it within a few days. Ultimately this pipeline would be replaced in later games with one based entirely on <em>Playdeo Capture</em>, but during the period before that, the pipeline tools were solid and saved us a huge amount of time.</p>
<h2 id="playdeo-capture" class="relative group">Playdeo Capture <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#playdeo-capture" aria-label="Anchor">#</a></span></h2><p>After Avo shipped, we could see the value in being able to shrink our video pipeline down. Using three different pieces of software every time we want to set up a new scene, with the photogrammetry and tracking taking at least a day was very awkward. It prevented us from quickly prototyping ideas, and was prone to human error. Our toolset in Unity was slowly growing to make scene assembly and authoring far quicker than before, and we needed to keep parity.</p>
<p>Using ARKit on iOS to record an AR session was an idea we’d had a while back, but it wasn’t obvious whether it was feasible. iOS has generally great media performance, but ARKit was still a relatively closed API. We worked with Sam Piggott on a Swift app that would record and store camera positions. In the early days of ARKit, you were prevented from doing much beyond displaying a 3D scene to the user. In order to make a recording system work, Sam figured out a way to peek into the hidden AVCaptureSession owned by ARKit and set up an AVAssetWriter to stream it out while also running the session. Luckily for us, Swift has retained the core aspects of introspection and reflection that were present in Obj-C.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-swift" data-lang="swift"><span class="line"><span class="cl"><span class="kd">private</span> <span class="kd">func</span> <span class="nf">attemptToRetrievePrivateCaptureDevice</span><span class="p">(</span><span class="n">session</span><span class="p">:</span> <span class="n">ARSession</span><span class="p">)</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">    <span class="kd">let</span> <span class="nv">sensors</span><span class="p">:</span> <span class="n">NSArray</span> <span class="p">=</span> <span class="n">session</span><span class="p">.</span><span class="n">value</span><span class="p">(</span><span class="n">forKey</span><span class="p">:</span> <span class="s">&#34;availableSensors&#34;</span><span class="p">)</span> <span class="k">as</span><span class="p">!</span> <span class="n">NSArray</span>
</span></span><span class="line"><span class="cl">    <span class="kd">let</span> <span class="nv">imageSensorClass</span><span class="p">:</span> <span class="nb">AnyClass</span> <span class="p">=</span> <span class="n">NSClassFromString</span><span class="p">(</span><span class="s">&#34;ARImageSensor&#34;</span><span class="p">)</span><span class="o">!</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="k">for</span> <span class="n">sensor</span> <span class="k">in</span> <span class="n">sensors</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="bp">debugPrint</span><span class="p">(</span><span class="s">&#34;Checking sensor </span><span class="si">\(</span><span class="n">sensor</span><span class="si">)</span><span class="s">&#34;</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">        <span class="kd">let</span> <span class="nv">parentImageSensorClass</span><span class="p">:</span> <span class="nb">AnyClass</span> <span class="p">=</span> <span class="n">NSClassFromString</span><span class="p">(</span><span class="s">&#34;ARParentImageSensor&#34;</span><span class="p">)</span><span class="o">!</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">        <span class="k">guard</span>
</span></span><span class="line"><span class="cl">            <span class="kd">let</span> <span class="nv">sensor</span> <span class="p">=</span> <span class="n">sensor</span> <span class="k">as</span><span class="p">?</span> <span class="n">NSObject</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">            <span class="n">sensor</span><span class="p">.</span><span class="n">isKind</span><span class="p">(</span><span class="n">of</span><span class="p">:</span> <span class="n">parentImageSensorClass</span><span class="p">)</span> <span class="p">==</span> <span class="kc">true</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">            <span class="kd">let</span> <span class="nv">captureSession</span><span class="p">:</span> <span class="n">AVCaptureSession</span> <span class="p">=</span> <span class="n">sensor</span><span class="p">.</span><span class="n">value</span><span class="p">(</span><span class="n">forKey</span><span class="p">:</span> <span class="s">&#34;captureSession&#34;</span><span class="p">)</span> <span class="k">as</span><span class="p">?</span> <span class="n">AVCaptureSession</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">            <span class="c1">// The first input should be the &#34;Back&#34; camera</span>
</span></span><span class="line"><span class="cl">            <span class="kd">let</span> <span class="nv">deviceInput</span><span class="p">:</span> <span class="n">AVCaptureDeviceInput</span> <span class="p">=</span> <span class="n">captureSession</span><span class="p">.</span><span class="n">inputs</span><span class="p">.</span><span class="bp">first</span> <span class="k">as</span><span class="p">?</span> <span class="n">AVCaptureDeviceInput</span> <span class="k">else</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">                <span class="bp">debugPrint</span><span class="p">(</span><span class="s">&#34;Failed to get AVCaptureDeviceInput&#34;</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">                <span class="k">return</span>
</span></span><span class="line"><span class="cl">        <span class="p">}</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">        <span class="n">underlyingCaptureDevice</span> <span class="p">=</span> <span class="n">deviceInput</span><span class="p">.</span><span class="n">device</span>
</span></span><span class="line"><span class="cl">        <span class="n">underlyingCaptureSession</span> <span class="p">=</span> <span class="n">captureSession</span>
</span></span><span class="line"><span class="cl">    <span class="p">}</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span></span></span></code></pre></div>
<p>This code isn&rsquo;t necessary with more recent versions of iOS. There are also a number of open source projects that do similar things in slightly different ways.</p>
<ul>
<li><a href="https://github.com/AFathi/ARVideoKit" target="_blank" rel="noreferrer">https://github.com/AFathi/ARVideoKit</a></li>
<li><a href="https://github.com/svhawks/SceneKitVideoRecorder" target="_blank" rel="noreferrer">https://github.com/svhawks/SceneKitVideoRecorder</a></li>
</ul>
<p>This workflow was successfully integrated into the entire end-to-end process, and we could produce a working iOS game featuring data recorded on Playdeo Capture. The image quality was obviously not as high as the images from Timo’s high-end DSLR cameras, and the ARKit mesh fidelity was lower, but it did allow for prototyping and ad hoc workflows. The original ingest Python script was replaced entirely, and ffmpeg was now run via Unity. We had successfully demonstrated a workflow which cut the number of pieces of software down from 5 to 2.</p>
<p>This project became important during the COVID pandemic in 2020. With all of the studio working from home, producing new video content became impossible. Playdeo Capture became literally the only way we could prototype new ideas, and that’s when Jack started experimenting with Playdeo Makes, episodic video featuring modular tabletop mini games and exercises you play with Avo. Nearly all of this was bootstrapped by Jack alone in his loft, with makeshift cardboard props serving to create a set and series of camera positions.</p>
<p>As lockdown eased, it became possible to have actors on a set, as long as a fairly strict set of guidelines was obeyed. Recent advances in iOS hardware meant that we could now shoot in 10-bit log at 4K, which gave a huge uplift to the visual quality, so much so that we didn’t need to go back to DSLRs. The only restriction was having fixed cameras, as 10-bit mode was incompatible with ARKit recording.</p>
<p>The last innovation that I want to mention is how we arranged our metadata. It was recognised that there was still a role for editing software like Final Cut Pro. However, manual logging of this data was slow, awkward and error-prone. I rewrote and simplified the way we logged clip data, and we standardised on a naming convention of scene, shot, take and camera number.</p>
<p>Because of the original architecture of <strong>Playdeo Capture</strong>, all data was centralised on the iPhone. For a future version we&rsquo;d want this performed on the server, but for now the phone held the data model for clips, and the video data. Any logged information would need to reach the phone somehow.</p>
<p>At the same time, Timo was stuck in Norway because of travel restrictions. I set up a small Rails website to allow shot logging. Timo was present on set via a Google Hangouts call, so he was able to see and hear what was happening. He could sit in Norway and act as Script Editor and Clapper Loader. He would type the scene, shot and camera numbers into the site, and it would calculate the take. Then in Playdeo Capture, every time the record button was hit, we could query the website and fetch the appropriate metadata, which would then be baked into the session on the iPhone.</p>
<p>The connection to the Rails site was done with the <a href="https://github.com/nerzh/Action-Cable-Swift" target="_blank" rel="noreferrer">ActionCableSwift</a> Swift package, and the relatively complex connection was managed with a large state machine implemented using <a href="https://github.com/ReactKit/SwiftState" target="_blank" rel="noreferrer">SwiftState</a>. Both of these are excellent packages to work with, and helped encapsulate a lot of the complexity that emerged from the period when this remote data solution was hacked together.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="/assets/img/playdeo/playdeo_capture_connections.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Shooting on set with virtual presence and metadata connections</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>Then, when it came to exporting the data from the iPhone, we created a fake FCPXML file that simulated a Final Cut Pro project. This meant that metadata would be preserved as it was passed into and out of Final Cut Pro, and we could retain the correct metadata and position data automatically.</p>
<h2 id="fastlane-and-the-build-pipeline" class="relative group">Fastlane and the build pipeline <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#fastlane-and-the-build-pipeline" aria-label="Anchor">#</a></span></h2><p>Early in our prototyping we implemented a <a href="https://fastlane.tools/" target="_blank" rel="noreferrer">Fastlane</a> pipeline to manage the Xcode modification, build and upload process. When Apple updates Xcode, Unity tends to lag behind upstream changes, and their <em>Trampoline</em> system for producing xcodeproj often ends up causing Xcode to display many deprecation and misconfiguration warnings. Fastlane is a sophisticated project that automates as much of the Apple build pipeline as possible, and allows for relatively straight forward programmatic manipulation of Xcode projects, and allowed me a hook to address these configuration issues.</p>
<p>Amongst other things, we used it for the following tasks:</p>
<ul>
<li>Automatically fetch large assets from remote storage</li>
<li>Manage signing identity</li>
<li>Add plugin code to the Xcode project</li>
<li>Automatically set version and build numbers</li>
<li>Provide custom On-Demand Resourcing setup manipulation in Xcode</li>
<li>Process and badge the launch icons to distinguish between build types</li>
<li>Toggle feature flags</li>
<li>Enable/disable the complex analytics code, to reduce dev build times</li>
<li>Auto tag releases in git</li>
<li>Upload builds to Testflight</li>
<li>Post to Slack channels for automatic notifications</li>
</ul>
<p>It really is a <em>Swiss Army Knife</em> for handling any tasks that might be too complex or difficult to perform in Unity, with the downside of increasing project dependencies. Where possible, we always made Unity capable of producing functional output for local development, but this ended up slipping behind as the project got larger.</p>
<p>I&rsquo;ve made a gist of the <a href="https://gist.github.com/nickludlam/cdb7905ae474044a8fbc74f7f33a9f9b" target="_blank" rel="noreferrer">Fastlane action, Unity C# class and example usage</a>. It&rsquo;s a cut-down version of what we used, but the core essentials are present. It&rsquo;s relatively easy to create your own custom command-line arguments to create a structure for build manipulation.</p>
<p>The rest of the build setup was based on a standard configuration of a CI server. We initially used <a href="https://www.gocd.org" target="_blank" rel="noreferrer">Go CD</a>, and later swapped to <a href="https://www.jenkins.io" target="_blank" rel="noreferrer">Jenkins</a> as others in the team were more familiar with it, making installation and maintenance easier to distribute among more people.</p>
<p>We found Go CD suffered from quirks, like an obscure bug relating to how the servers and clients are built and run as GUI tools from the Dock, leading to a pollution of the environment. <a href="https://github.com/gocd/gocd/issues/5857" target="_blank" rel="noreferrer">In this case</a>, to fix a bizarre error you had to ensure you <code>unset CFProcessPath</code> early on before running Unity. This is an oddity that dates back to the changeover from <a href="https://en.wikipedia.org/wiki/Carbon_%28API%29" target="_blank" rel="noreferrer">Carbon to Cocoa</a> on the Mac, and was a particularly ancient and difficult to pin down issue.</p>
<p>Paying attention to your build process early allows the project to grow in complexity without becoming unwieldy over time. If you have an organised structure on which to hang additional code, it helps guide people towards solutions and amendments which hold up over the lifespan of the project, and gives methods and files a more predictable home. Do ensure that the right people review code changes, however, as build system fragility is not always obvious until the build server breaks!</p>
<h2 id="metasploit" class="relative group">Metasploit <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#metasploit" aria-label="Anchor">#</a></span></h2><p>Although not strictly part of the technology, sometimes you get to contribute to the authenticity of technology&rsquo;s representation in media.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="/assets/img/playdeo/avosploit_in_action.jpeg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">The metasploit output in the game</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>This is a geeky easter egg inside Avo, and I&rsquo;m not sure if anyone ever noticed. For the hacking scene, I took the real console output of <a href="https://www.metasploit.com" target="_blank" rel="noreferrer">Metasploit</a> and <a href="https://gist.github.com/nickludlam/e8efcec912540ada7788e91da452f749" target="_blank" rel="noreferrer">rewrote it</a> to be more in-keeping with the Avo universe. The actual activity run inside it was the real <a href="https://github.com/rapid7/metasploit-framework/blob/master/documentation/modules/auxiliary/scanner/ssl/openssl_heartbleed.md" target="_blank" rel="noreferrer">OpenSSL Heartbleed exploit</a> from April 2014.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="/assets/img/playdeo/avosploit.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">The raw Metasploit output</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>The other component was <a href="https://gist.github.com/nickludlam/7602d1e31a066108447c09f717f32f3e" target="_blank" rel="noreferrer">a small Ruby script</a> which allowed any keyboard interaction to trigger ASCII text to be written out to the terminal. This gave Katie the ability to type anything on the keyboard and produce sensible output, giving the impression of the expert that Billie was meant to be.</p>
]]></content:encoded>
      </item>
    
      <item>
        <title>Playdeo Part 2 - Building Avo</title>
        <link>https://nick.recoil.org/work/playdeo-building-avo/</link>
        <guid>https://nick.recoil.org/work/playdeo-building-avo/</guid>
        <pubDate>Thu, 05 May 2022 14:39:48 &#43;0000</pubDate>
        <description>&lt;![CDATA[]]></description>
        <content:encoded>&lt;![CDATA[<p>This post follows on from 
      
    <a href="https://nick.recoil.org/work/playdeo-material-exploration/">part one of the Playdeo story</a>.</p>
<h2 id="alpha-1-jan-2018" class="relative group">Alpha 1, Jan 2018 <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#alpha-1-jan-2018" aria-label="Anchor">#</a></span></h2><p>Our previous game prototype <strong>Sharpen</strong> was then followed by a more fleshed out idea codenamed <strong>Tiny Frankenstein</strong>. A controllable pencil sharpener didn&rsquo;t offer enough innate personality, but cemented the idea for inanimate objects imbued with life. We&rsquo;d need to bring more charisma to make a truly great game protagonist that the player would form a bond with.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="/assets/img/playdeo/fruit_auditions.jpeg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">We auditioned a lot of different fruit</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>I do not recall exactly where the idea came from, whether it was Ryan North&rsquo;s story, or Jack and Jon&rsquo;s choice, but out of all the different fruit we photoscanned, we chose the humble avocado, and thus Avo was born. The new prototype kept the core ideas, but now incorporated a script and narrative, where the fetch quest was given to you through dialogue rather than the on-screen text we had in Sharpen.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="/assets/img/playdeo/alpha1_set.jpeg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">The basic Alpha 1 S-shaped desk</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>Above you can see the basic layout for Alpha 1, with a longer, more purpose-built area for you to walk around. Jack was still playing the role of the scientist who brought our protagonist to life, and the set was made of basic white desks. As we filmed and produced the prototype, it became obvious something was missing.</p>
<p>Mechanically we proved the game idea could work, but this was a rare case where the technology outpaced the film making. It became obvious that we needed to start looking for a professional actor, a more visually interesting location and more lavish props for you to be surrounded by.</p>
<h2 id="alpha-2-feb-2018" class="relative group">Alpha 2, Feb 2018 <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#alpha-2-feb-2018" aria-label="Anchor">#</a></span></h2><p>Alpha 2 featured a custom built set and actor <a href="https://www.imdb.com/name/nm6697463/" target="_blank" rel="noreferrer">Katie Reece</a> as our hero inventor Billie. We also saw an early version of Avo with his procedural walk system, custom props, and a much stronger dialogue and narrative. It was an immediate win, with a far more engaging interaction, plus Avo’s own quirky personality was beginning to shine through.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="/assets/img/playdeo/alpha1vs2.jpeg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Jack in Alpha 1, Katie in Alpha 2</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>In an effort to keep development simple and flexible, our protagonist Avo was silent, taking influence from Wallace and Gromit. Since we now had dialogue and a story, we needed to edit and grade footage in Final Cut Pro before feeding it into Unity. This was a huge step for us, as it was the first time we had meaningfully used an audio track from the video to convey information to the player.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="flex md:px-24 justify-center">
  <figure>
    <video style="margin: 0"  muted autoplay loop playsinline>  
      <source src="/assets/video/playdeo/alpha_2.mp4" type="video/mp4" />
      <source src="/assets/video/playdeo/alpha_2.webm" type="video/webm" />
    </video>
    
    <figcaption class="text-center">Avo started out life with just his legs in Alpha 2</figcaption>
    
</figure>
</div>

    </div>
  </div>
</div>
<p>Alpha 2 got more and more polished and began to generate its own gravity. We added flourishes like the collectables, which helped guide players towards where they needed to take Avo. It felt exciting, fun, full of heart, and was finally something we could scale up into a full game.</p>
<p><a href="http://ryannorth.ca" target="_blank" rel="noreferrer">Ryan North</a> and <a href="http://gemmaarrowsmith.com" target="_blank" rel="noreferrer">Gemma Arrowsmith</a> helped us create a fun and engaging story. It ended up being wildly ambitious and needed scaling back, with one or two chapters going unfilmed, but the bones of it were there. What followed was location scouting, set building, more props, full script development, table reads and all of the usual aspects of a full TV production, except done on a tiny budget, and very much in the guerilla, forgiveness-not-permission, school of filmmaking. This was all managed by the talented <a href="https://www.linkedin.com/in/lotta-boman-79b7a255" target="_blank" rel="noreferrer">Lotta Boman</a> who excelled under these resource constrained circumstances.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="/assets/img/playdeo/sequencer_and_checkpoint_tools.jpeg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Alpha 2 Sequencer and Checkpoints tools</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>While Alpha 2 could be considered a success, many of the systems used to pull it together were not robust or scalable. This included aspects of narrative construction like the Sequencer and our save/load system based around checkpoints, as shown in the screenshot above. If we were going to build a full game with many levels, our tools needed to become far more capable and to streamline many of the repetitive tasks involved in level construction.</p>
<h2 id="the-shoot-may-2018" class="relative group">The Shoot, May 2018 <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#the-shoot-may-2018" aria-label="Anchor">#</a></span></h2><p>We’d now finished the invention phase, and tipped into production, polish and delivery. Filming started in May 2018, and continued for approximately 10 weeks. The studio was divided in two for the duration of the shoot, half on set, and half back in the studio preparing for a tranche of video to suddenly turn up, feverishly working on mesh alignment tools, level creation tools, the video processing pipeline, a data-binding UI system, integration with the <a href="https://www.audiokinetic.com/en/products/wwise" target="_blank" rel="noreferrer">Wwise audio system</a>, and a whole host of other elements of technical debt.</p>
<p>I was split across a number of different areas, as well as providing overall technical leadership for the team. By far the most urgent task was supporting our data pipeline. We were suddenly generating a huge amount of video footage, more than anything else we’d previously experienced, and it all needed editing and processing. We swapped over to using Blackmagic’s <a href="https://www.blackmagicdesign.com/products/davinciresolve" target="_blank" rel="noreferrer">DaVinci Resolve</a> rather than Final Cut Pro. Tracking was still being done in <a href="https://www.thepixelfarm.co.uk/pftrack/" target="_blank" rel="noreferrer">PFTrack</a>, and the photogrammetry in <a href="https://www.capturingreality.com" target="_blank" rel="noreferrer">Reality Capture</a>.</p>
<p>Timo and I defined a new data pipeline that correlated PFTrack exports, DaVinci’s exported FCPXML timelines, CSV metadata, and the video files to form a sufficiently robust mix that would cross-reference frame counts, filenames and a host of other data to spot and highlight any human error that crept in. As we were doing ten times as much data processing as before and involving people new to the process, this was incredibly important. Bad data that makes its way into a test build needs to be easily distinguishable from errors in the code itself, as both were changing frequently, and it’s very costly to involve the whole team in diagnosing build issues.</p>
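<p>To give a flavour of the kind of cross-referencing involved, here&rsquo;s a minimal sketch in Python. It is illustrative rather than the actual Playdeo pipeline: it assumes a simplified CSV of shot metadata with invented column names, and uses ffprobe to read the real frame count of each delivered clip so the two can be compared.</p>
<pre><code class="language-python"># Sketch: cross-check shot metadata against the clips actually on disk.
# Assumes a CSV with 'clip_name' and 'frame_count' columns (invented names)
# and a directory of .mp4 files; ffprobe reports each file's true frame count.
import csv
import json
import subprocess
from pathlib import Path

def probe_frame_count(clip: Path) -> int:
    """Ask ffprobe how many frames the clip actually contains."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-count_frames", "-select_streams", "v:0",
         "-show_entries", "stream=nb_read_frames", "-of", "json", str(clip)],
        capture_output=True, text=True, check=True)
    return int(json.loads(out.stdout)["streams"][0]["nb_read_frames"])

def validate(metadata_csv: Path, clip_dir: Path) -> list[str]:
    """Return human-readable errors; an empty list means the data is clean."""
    errors = []
    with open(metadata_csv, newline="") as f:
        for row in csv.DictReader(f):
            clip = clip_dir / f"{row['clip_name']}.mp4"
            if not clip.exists():
                errors.append(f"missing clip: {clip.name}")
                continue
            expected, actual = int(row["frame_count"]), probe_frame_count(clip)
            if expected != actual:
                errors.append(f"{clip.name}: metadata says {expected} frames, file has {actual}")
    return errors

if __name__ == "__main__":
    for problem in validate(Path("shots.csv"), Path("clips")):
        print("ERROR:", problem)
</code></pre>
<p>The real pipeline folded in PFTrack exports and FCPXML timelines as well, but the principle is the same: every independent source of truth gets checked against the others before it is allowed anywhere near a build.</p>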
<h2 id="the-sprint-september-2018" class="relative group">The Sprint, September 2018 <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#the-sprint-september-2018" aria-label="Anchor">#</a></span></h2><p>From September 2018 to January 2019 we implemented the eight episodes in the main game. We added subtitle support, full music and sound effects, bluetooth audio support, the save checkpoint system, localisation, analytics, general UI, IAP integration, On-Demand Resource support, AR mode, low and high resolution videos, and a whole host of other things. We had no specific producer, so our weekly planning meetings were crucial for establishing bottlenecks, and towards the end I was generally responsible for keeping the flow of work steady, as it became more and more technical. It was a remarkably intense time, and for the most part highly productive.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="/assets/img/playdeo/repo_commits.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Our git commit graph. The dotted line is our launch day</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>As well as the data pipeline, I also worked on a whole host of game features. The video playback system needed constant improvement as we attempted more ambitious edits, transitions and audio layering. I also worked on any areas where we integrated with native iOS functions. This included In-App Purchase integration, <a href="https://developer.apple.com/library/archive/documentation/FileManagement/Conceptual/On_Demand_Resources_Guide/index.html" target="_blank" rel="noreferrer">On-Demand Resource</a> fetching and optimisation for less capable iPhones and iPads.</p>
<p>Just before launch I managed to implement the PathEDL system, a way of mitigating the lag experienced when playing the game over Bluetooth audio. Bluetooth audio had become increasingly common following the launch of <a href="https://en.wikipedia.org/wiki/AirPods" target="_blank" rel="noreferrer">AirPods</a> at the end of 2016, and the system became crucial to maintaining a good feel for Avo&rsquo;s line walking, as we had to increase his walk speed to counter the larger set we ended up building.</p>
<h2 id="the-launch-jan-2019" class="relative group">The launch, Jan 2019 <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#the-launch-jan-2019" aria-label="Anchor">#</a></span></h2><p>We finally launched Avo at the end of January 2019. It has gone on to have nearly four million downloads and is regularly promoted in the App Store at the time of writing this in 2022. For a new studio&rsquo;s first title in a new medium, in I consider those numbers a huge success. While it may seem coherent and polished from the outside, it really was a hard won product born from three years of inventive exploration in a brand new medium.</p>
<p>It also had some nice media attention, from <a href="https://www.youtube.com/watch?v=zAmOsGKBYWk" target="_blank" rel="noreferrer">this review by Leo Laporte on TWiT</a> to <a href="https://medium.com/a-chair-in-a-room/avo-stadia-arcade-bandersnatch-and-the-new-grammar-of-television-and-games-part-1-form-is-cf5188d142b1" target="_blank" rel="noreferrer">Dan Hill&rsquo;s very in-depth dive into the medium as a whole</a>. One of my personal favourites was <a href="https://www.youtube.com/watch?v=_kmeb7zURNs" target="_blank" rel="noreferrer">a video review and skit from FGTeeV</a>, made using the AR video recording mode I worked on in the game.</p>
<p>As with all startups, you end up wearing many hats, but this broke the record for me:</p>
<ul>
<li><strong>Lead engineer</strong> - initial architecture, setting team goals, 3rd party integration</li>
<li><strong>Data pipeline work</strong> - video data, tracking data, subtitles, analytics</li>
<li><strong>Build dev</strong> - Fastlane and the CI server</li>
<li><strong>Video specialist</strong> - native video playback plugin</li>
<li><strong>Game feel and optimisation dev</strong> - developing PathEDL and overall improvements to game input</li>
<li><strong>Asset control and code versioning work</strong> - scripting use of the NAS server and main git wrangler</li>
<li><strong>Overall tech team leader</strong> - hiring, whiteboarding, troubleshooting, reviews</li>
</ul>
<h2 id="the-making-of-video-march-2019" class="relative group">The &lsquo;Making of&rsquo; video, March 2019 <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#the-making-of-video-march-2019" aria-label="Anchor">#</a></span></h2><p>Timo and Jack organised a <a href="https://www.youtube.com/watch?v=Za4zDlRYrZ4" target="_blank" rel="noreferrer">behind the scenes video of Avo&rsquo;s production</a>. I&rsquo;ve also uploaded a copy to our instance of <a href="https://crank.recoil.org/" target="_blank" rel="noreferrer">PeerTube running on recoil.org</a>. It&rsquo;s a lovely encapsulation of the studio and its work at that time.</p>
<div class="relative h-0 overflow-hidden max-w-full w-full" style="padding-bottom: 56.25%">
  <iframe class="absolute top-0 left-0 w-full h-full" sandbox="allow-same-origin allow-scripts allow-popups" src="https://crank.recoil.org/videos/embed/8cbf6ffc-89ff-4cb6-bdd5-9b9525f8c318" frameborder="0" allowfullscreen></iframe>
</div>
<p>There were a large number of people involved in <strong>Avo</strong> who haven&rsquo;t got a mention here, as this is told from my personal perspective, and I&rsquo;ve cherry picked the most interesting aspects of what we did for the purposes of brevity. The <a href="https://www.imdb.com/title/tt9543952/fullcredits/" target="_blank" rel="noreferrer">IMDB entry for Avo</a> and <a href="https://www.mobygames.com/game/avo" target="_blank" rel="noreferrer">Moby Games page</a> have a complete list of the cast and crew.</p>
<p>The final article covers more of the technical aspects of the work. Read 
      
    <a href="https://nick.recoil.org/work/playdeo-technology/">part three of the Playdeo story</a>.</p>
]]></content:encoded>
      </item>
    
      <item>
        <title>Playdeo Part 1 - Material Exploration</title>
        <link>https://nick.recoil.org/work/playdeo-material-exploration/</link>
        <guid>https://nick.recoil.org/work/playdeo-material-exploration/</guid>
        <pubDate>Tue, 03 May 2022 13:23:20 BST</pubDate>
        <description>&lt;![CDATA[]]></description>
<content:encoded>&lt;![CDATA[<p>Back in 2016, Jack Schulze, Timo Arnall and I set up Playdeo: a studio to develop an original idea for games and interactive experiences using full screen video.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center md:px-24">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="/assets/img/playdeo/playdeo_crew.jpeg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">The studio just after the launch of our first game <strong>Avo</strong></figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>This was the first time any of us had been directly involved with attempting to build a game, although we’d successfully worked with each other at the design studio <a href="http://berglondon.com/" target="_blank" rel="noreferrer">BERG</a> on a series of projects involving complex design and cutting edge tech. Jack brought his unique vision and design sense, Timo added his in-depth camera wrangling and cinematic skills, and I had direct experience in native mobile apps, video, and a history working in visual effects houses.</p>
<p>This article covers the period of time from the very start of our idea through to the end of the material exploration phase, where we tipped into producing our first game, <a href="https://apps.apple.com/gb/app/avo/id1452511688" target="_blank" rel="noreferrer">Avo</a>.</p>
<h2 id="the-spark-2015" class="relative group">The spark, 2015 <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#the-spark-2015" aria-label="Anchor">#</a></span></h2><p>Jack started thinking about the possibilities of touchable video after seeing his young kids reach out and touch the screen of their iPads while watching a movie in 2015. Everything else you do with an iPad involves screen interaction, and in their minds video shouldn’t be any different, so of course it would be touchable! Instead, all they saw were the playback controls appearing. This made Jack wonder how true touch interaction with the video picture might actually work. He started making a prototype, working with <a href="https://gregborenstein.com/" target="_blank" rel="noreferrer">Greg Borenstein</a> to get things off the ground.</p>
<p>Some time later, Jack invited me for a coffee and showed me his first working prototype running on a laptop. It integrated three key elements; the full screen video, a 3D scene and camera motion tracking. This prototype just about held together. There was no audio, the file sizes were huge, playback wasn’t smooth, rendering was inefficient; the list went on. However, in spite of all that, there was something undeniably magical about it. You could render 3D objects into the video with rock solid believability, plus you could interact with anything on the screen. It worked better than the performance of AR at the time, and was under the player’s control, unlike something pre-rendered.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center md:px-24">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="/assets/img/playdeo/video_with_mesh.jpeg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">An early prototype showing the video with underlying 3D mesh</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>All the core elements were present, but to unlock a viable future for the tech, we knew we had to get a prototype working on mobile devices. This would immediately derisk the project by a huge amount, and since I’d spent time developing native iOS apps before, Jack wanted to know if this was something I’d be interested in working on. There was clearly enough here to show the idea’s enormous potential, so I eagerly hopped on board.</p>
<p>The first thing I worked on was video playback. There’s no way you could ship a product with JPEG flip-books, so we’d need a way of decoding video frames in real time. After three weeks of work, starting in October, I had a very crude iOS prototype working where we could decode true MP4 video frames and pass them into Unity as a standard texture. Crucially, this also included frame timing information, so we could look up accompanying camera position data when rendering the video frames.</p>
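<p>To illustrate the principle (and only the principle; the real code lived in a native iOS plugin and in Unity), here&rsquo;s a minimal Python sketch of that lookup. It assumes a constant frame rate and a camera track exported as one pose per frame; the names and pose format are invented for the example.</p>
<pre><code class="language-python"># Sketch: map a decoded frame's presentation timestamp back to a tracked camera pose.
# Assumes a constant frame rate and one exported pose per frame (illustrative format).
from dataclasses import dataclass

@dataclass
class CameraPose:
    position: tuple[float, float, float]
    rotation: tuple[float, float, float, float]  # quaternion

class CameraTrack:
    def __init__(self, poses: list[CameraPose], fps: float):
        self.poses = poses
        self.fps = fps

    def frame_index(self, presentation_time: float) -> int:
        """Convert a frame's presentation timestamp (seconds) into a frame number."""
        return round(presentation_time * self.fps)

    def pose_for_time(self, presentation_time: float) -> CameraPose:
        """The pose to render the 3D scene with, so it lines up with this video frame."""
        idx = min(self.frame_index(presentation_time), len(self.poses) - 1)
        return self.poses[idx]
</code></pre>
<p>Each time the plugin hands over a new texture along with its timestamp, the renderer can set the virtual camera from <code>pose_for_time</code> before drawing the 3D layer over the video.</p>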
<p>This gave us the confidence to begin scaling up our work, and start working with someone who knew the Unity engine well.</p>
<h2 id="tooling-up-2015-2016" class="relative group">Tooling up, 2015-2016 <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#tooling-up-2015-2016" aria-label="Anchor">#</a></span></h2><p><a href="http://ludopathic.co.uk/PressKit/index.php" target="_blank" rel="noreferrer">Aubrey Hesselgren</a> joined us over the winter to work on the very first prototypes, bringing his Unity expertise to the team, and helping with many of the core animation and rendering tasks. Our prototypes were crude, and relied on very ad-hoc processes of ingest and data manipulation, but each one gave us insight into what looked good, and what gameplay opportunities were available.</p>
<p>We had to work blind in the Unity Editor, as video decoding only worked on mobile devices. We’d frequently encounter issues such as playback synchronisation problems, which involved a lot of trial-and-error debugging using high-speed video of the phone screens to figure out timing issues. It was crude, but nothing existed within Unity’s standard tools that was geared up to debug this sort of process, so we had to roll everything ourselves. Progress during this period was slow and frustrating.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center md:px-24">
      

<div class="flex items-center justify-center">
  <figure>
    <video style="margin: 0"  muted autoplay loop playsinline>  
      <source src="/assets/video/playdeo/slowmo_debug.mp4" type="video/mp4" />
      <source src="/assets/video/playdeo/slowmo_debug.webm" type="video/webm" />
    </video>
    
    <figcaption class="text-center">Slow motion debugging video frames</figcaption>
    
</figure>
</div>

    </div>
  </div>
</div>
<p>As time went on, I began to bring formality and automation to our data pipeline and build process. The ingest process consisted of an ever growing Python script that formed the spine of multi stage data wrangling. Because we wanted to have near instantaneous access to any video frame, we used ffmpeg to concatenate all the videos together into a single seekable file. We started to use Autodesk’s FBX Python library to allow us to programmatically manipulate keyframe data from the camera track, rather than relying on Unity’s systems (which always wanted to smooth this motion out).</p>
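<p>A minimal sketch of that concatenation step, using ffmpeg&rsquo;s concat demuxer. It assumes all clips share the same codec, resolution and frame rate so the streams can be copied without re-encoding; the filenames here are illustrative.</p>
<pre><code class="language-python"># Sketch: stitch individually delivered clips into one seekable master file.
# Stream copy (-c copy) avoids re-encoding, which only works when every clip
# shares the same codec, resolution and frame rate.
import subprocess
from pathlib import Path

def concatenate_clips(clips: list[Path], output: Path) -> None:
    list_file = output.with_suffix(".txt")
    # The concat demuxer reads a text file with one line per clip: file '/abs/path.mp4'
    list_file.write_text("".join(f"file '{clip.resolve()}'\n" for clip in clips))
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
         "-i", str(list_file), "-c", "copy", str(output)],
        check=True)

if __name__ == "__main__":
    concatenate_clips(sorted(Path("clips").glob("*.mp4")), Path("master.mp4"))
</code></pre>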
<p>For the build process we used Fastlane. I disliked Unity’s automatically generated Xcode projects, as it would frequently be out of step with iOS releases, and I wanted a way to manipulate the generated project files independently. I’d seen Fastlane put to excellent work by <a href="https://www.tomtaylor.co.uk/" target="_blank" rel="noreferrer">Tom Taylor</a> in a previous job, and knew it represented the perfect Swiss Army knife for manipulating, building and distributing our prototypes.</p>








<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-2 gap-2 justify-items-center ">
      

<div class="flex items-center justify-center">
  <figure>
    <video style="margin: 0"  muted autoplay loop playsinline>  
      <source src="/assets/video/playdeo/orange_car_1.mp4" type="video/mp4" />
      <source src="/assets/video/playdeo/orange_car_1.webm" type="video/webm" />
    </video>
    
    <figcaption class="text-center">Hey presto, our little car appears</figcaption>
    
</figure>
</div>

<div class="flex items-center justify-center">
  <figure>
    <video style="margin: 0"  muted autoplay loop playsinline>  
      <source src="/assets/video/playdeo/orange_car_2.mp4" type="video/mp4" />
      <source src="/assets/video/playdeo/orange_car_2.webm" type="video/webm" />
    </video>
    
    <figcaption class="text-center">The headlights can relight the video</figcaption>
    
</figure>
</div>

    </div>
  </div>
</div>
<p>After a few months, we produced the <strong>Orange Car</strong> demo, some clips from which you can see in the images above. This was our most polished demo to date. It started with Jack speaking to camera, explaining what was going to happen, then opening a box. Out popped a little orange car which the player could control. The car headlights could even dynamically illuminate the objects on the table.</p>
<h2 id="investment-2016" class="relative group">Investment, 2016 <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#investment-2016" aria-label="Anchor">#</a></span></h2><p>Armed with the <strong>Orange Car</strong> demo, we formed a company. Jack, Timo and I felt the demo showed the amazing potential of what we could make. The idea was easy to understand and, more importantly, it ran directly on a phone. Off the strength of the demo, we sought some initial investment to start scaling our work up. Chris Lee joined us as a fourth founder, bringing his important games industry knowledge.</p>
<p>We settled into a co-working space in Whitechapel, East London. Having spent much of the previous time camped out in Timo’s mother’s front room, with three or four of us packed in like sardines, it was a welcome step. It was a noisy and hectic space, and inexplicably our tele-sales entrepreneur neighbours would always love having their loudest conversations just outside our door. On the plus side we had one of the most important pieces of equipment, <strong>a huge whiteboard!</strong> I believe it’s the intellectual and spiritual hearth for people doing collaborative, inventive work.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center md:px-24">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="/assets/img/playdeo/whiteboard_montage.jpeg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Whiteboarding is an amazing way to communicate ideas</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>In September 2016 one of our developers <a href="https://yeray.dev/" target="_blank" rel="noreferrer">Yera Diaz</a> completed one of the most important pieces of early work, a video plugin for the Unity Editor. We would finally be able to prototype by simply hitting the Editor&rsquo;s <em>Play</em> button rather than requiring a whole iOS build to be made before we could see anything working. This would transform the experience of exploring this new medium, and accelerate the pace of our work enormously.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center md:px-24">
      

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class=" rounded-md" src="/assets/img/playdeo/video_corruption.jpeg"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">Plenty of time was spent debugging video playback issues like this</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>In October 2016, we worked with <a href="https://www.glowmade.com/" target="_blank" rel="noreferrer">Jonny Hopper and Mike Green from Glowmade</a> to smarten up some of the core code, and to start thinking about gameplay and interaction. We experimented with a platform game, and at this stage were still very much treating the phone like a TV, with landscape orientation and virtual joysticks for control. We were starting to mould the codebase into a space where we could experiment, and to derive a predictable and quick pipeline.</p>








<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-2 gap-2 justify-items-center ">
      

<div class="flex items-center justify-center">
  <figure>
    <video style="margin: 0"  muted autoplay loop playsinline>  
      <source src="/assets/video/playdeo/torchlight.mp4" type="video/mp4" />
      <source src="/assets/video/playdeo/torchlight.webm" type="video/webm" />
    </video>
    
    <figcaption class="text-center">Torchlight experiment with dynamic match-cuts</figcaption>
    
</figure>
</div>

<div class="flex items-center justify-center">
  <figure>
    <video style="margin: 0"  muted autoplay loop playsinline>  
      <source src="/assets/video/playdeo/clean_up_my_mess.mp4" type="video/mp4" />
      <source src="/assets/video/playdeo/clean_up_my_mess.webm" type="video/webm" />
    </video>
    
    <figcaption class="text-center">Clean Up My Mess experimented with physics</figcaption>
    
</figure>
</div>

    </div>
  </div>
</div>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center md:px-24">
      

<div class="flex items-center justify-center">
  <figure>
    <video style="margin: 0"  muted autoplay loop playsinline>  
      <source src="/assets/video/playdeo/kerbside.mp4" type="video/mp4" />
      <source src="/assets/video/playdeo/kerbside.webm" type="video/webm" />
    </video>
    
    <figcaption class="text-center">Kerbside demo exploring dynamic lights</figcaption>
    
</figure>
</div>

    </div>
  </div>
</div>
<p>Soon after, we started working on a long string of unique prototypes: <strong>Day/Night</strong>, <strong>Time Travelling</strong>, <strong>Kerbside</strong>, <strong>Physics Toy</strong> and <strong>Clean Up My Mess</strong>. These helped bring forward the idea that you were actually playing in video, not just watching it. While each prototype was its own separate concept, all of them explored touch interactions in various ways. Should the phone be horizontal or vertical? Was a virtual joystick really the best way for players to interact? How do you receive feedback?</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center md:px-24">
      

<div class="flex items-center justify-center">
  <figure>
    <video style="margin: 0"  muted autoplay loop playsinline>  
      <source src="/assets/video/playdeo/dragging_into_the_world.mp4" type="video/mp4" />
      <source src="/assets/video/playdeo/dragging_into_the_world.webm" type="video/webm" />
    </video>
    
    <figcaption class="text-center">Exploring touch interaction systems</figcaption>
    
</figure>
</div>

    </div>
  </div>
</div>
<p>For some of our experiments, we needed a way of transitioning from a 2D touch to a 3D drag, as can be seen in the images above. Although we made it work, it wasn’t as intuitive as we’d have liked, and we did not find any gameplay interactions that made it feel satisfying, so we kept moving forward with new prototypes.</p>
<h2 id="new-offices-and-a-larger-team-2017" class="relative group">New offices and a larger team, 2017 <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#new-offices-and-a-larger-team-2017" aria-label="Anchor">#</a></span></h2><p>At the start of 2017 we moved to <a href="https://thetrampery.com/workspaces/republic/" target="_blank" rel="noreferrer">The Trampery Republic</a>, a workspace in East India Docks in East London, and started to scale up our headcount. This was an exciting moment, but it also put pressure on everybody. We lacked sophisticated editor tools, so all design work had to be through sheer imagination first and foremost. It was particularly tough as we were working with a new medium that lacked a back catalogue of reference material. The traditional tools such as game design documents didn’t work, as there were so many technical limitations, and no obvious genre to aim for. This was probably the most difficult time, with far more experimental failures than successes.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center md:px-24">
      

<div class="flex items-center justify-center">
  <figure>
    <video style="margin: 0"  muted autoplay loop playsinline>  
      <source src="/assets/video/playdeo/proto_tolla.mp4" type="video/mp4" />
      <source src="/assets/video/playdeo/proto_tolla.webm" type="video/webm" />
    </video>
    
    <figcaption class="text-center">The Night Garden prototype with Tolla</figcaption>
    
</figure>
</div>

    </div>
  </div>
</div>
<p>By April 2017 we started working with <a href="https://supersplinestudios.com/" target="_blank" rel="noreferrer">Super Spline Studios</a> and <a href="https://x.com/shanazbyrne" target="_blank" rel="noreferrer">Shanaz Byrne</a> on a project we called <strong>Night Garden</strong>, featuring a character called Tolla (seen in the image above). It was our first time experimenting with humanoid animations, inverse kinematics, enemies and a whole slew of other features.</p>
<p>It was an on-rails runner, with some (but limited) control over where the character was positioned, and a single long take of a camera’s path through a garden environment. Ultimately we didn’t take it forward as we felt there was not sufficiently diverse gameplay or replayability, but we were slowly improving our capabilities and ambitions. This was definitely the closest we’d come to an actual gameplay loop.</p>
<p>It was at this point that we fully committed to vertical orientation as our preferred way of holding the phone, and to using a single finger for most interactions. It was the right balance between interaction, comfort and screen visibility. This was before the rise of TikTok, YouTube Stories or any other large-scale proof that vertical video would be accepted by our audience.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center md:px-24">
      

<div class="flex items-center justify-center">
  <figure>
    <video style="margin: 0"  muted autoplay loop playsinline>  
      <source src="/assets/video/playdeo/cafe_racer.mp4" type="video/mp4" />
      <source src="/assets/video/playdeo/cafe_racer.webm" type="video/webm" />
    </video>
    
    <figcaption class="text-center">Racing prototype with camera cuts to reveal more of the world</figcaption>
    
</figure>
</div>

    </div>
  </div>
</div>
<p>In August 2017 we briefly went back to the idea of controlling small cars on a race track, similar to the original Orange Car demo, but now shot vertically with single finger steering. This featured a small but crucial new facility for us; cutting to a different camera as you approached the edge of the screen. Although this brief exploration of racing wasn’t taken forward, the idea of cutting between cameras would stay, allowing the player to explore the 3D space under their own control.</p>
<p>This prompted all kinds of interesting questions about continuity and <em>video time</em> vs <em>game time</em>. That is, the player experiences each use of a video clip in a strictly linear fashion, so we had to be careful with anything that visually changed the world. For example, if we showed someone putting down a cup of coffee, then each subsequent clip we used had to show the coffee cup on the table, or we needed to show a clip of someone picking it back up. We learned that video clips designed for reuse had to avoid changing the set in any way. This was a big step forward in our understanding, and it shaped our later work.</p>
<h2 id="game-prototypes-2017" class="relative group">Game prototypes, 2017 <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#game-prototypes-2017" aria-label="Anchor">#</a></span></h2><p>The next important step in our prototyping was made by <a href="https://jonathantopf.com" target="_blank" rel="noreferrer">Jonathan Topf</a>. Because of his work on a previous game he created, Trickshot, he had a good feel for players using a touch screen as the primary control system. His insight was to allow players to draw an intended path of movement for the character, rather than manipulating indirect controls like a touchpad or virtual joystick. I remember being really impressed at the time, and knowing this method of input was right for our game.</p>








<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-2 gap-2 justify-items-center ">
      

<div class="flex items-center justify-center">
  <figure>
    <video style="margin: 0"  muted autoplay loop playsinline>  
      <source src="/assets/video/playdeo/apple_table.mp4" type="video/mp4" />
      <source src="/assets/video/playdeo/apple_table.webm" type="video/webm" />
    </video>
    
    <figcaption class="text-center">Jon&#39;s line drawing prototype</figcaption>
    
</figure>
</div>

<div class="rounded-md flex items-center justify-center">
  <figure>
    <picture>
      <img class="border rounded-md" src="/assets/img/playdeo/drawing_line_commit.png"
        alt="" style="margin: 0" />
      
      <figcaption class="text-center">It was obviously a breakthrough at the time</figcaption>
      
    </picture>
  </figure>
</div>

    </div>
  </div>
</div>
<p>It’s a fantastic mechanic for player control on mobile, but it wouldn’t have made as much sense without the ability to change camera angles when approaching different regions of the table. In creative companies doing good work, breakthroughs should happen all the time, in lots of different areas. The key is to communicate these breakthroughs, and allow new possibilities to be unlocked at all points up and down the technology stack.</p>
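<p>As a rough sketch of what draw-to-move input can boil down to: take the touch samples (already projected onto the walkable surface), then resample the resulting polyline into evenly spaced waypoints for the character to follow. This Python version is purely illustrative; the real system lived in Unity and also had to cope with camera cuts and mesh alignment.</p>
<pre><code class="language-python"># Sketch: resample a drawn path into evenly spaced waypoints a character can walk.
# Points are assumed to already be projected onto the walkable surface (2D here
# for brevity); the spacing value is an illustrative tuning parameter.
import math

Point = tuple[float, float]

def resample_path(points: list[Point], spacing: float) -> list[Point]:
    """Return waypoints spaced `spacing` apart along the drawn polyline."""
    if len(points) &lt; 2:
        return list(points)
    waypoints = [points[0]]
    dist_since_last = 0.0  # distance walked since the last emitted waypoint
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg_len = math.hypot(x1 - x0, y1 - y0)
        while seg_len and dist_since_last + seg_len >= spacing:
            # Step exactly far enough along this segment to reach the next waypoint.
            t = (spacing - dist_since_last) / seg_len
            x0, y0 = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            waypoints.append((x0, y0))
            seg_len -= spacing - dist_since_last
            dist_since_last = 0.0
        dist_since_last += seg_len
    return waypoints

# Example: waypoints every 2.5 units along a two-segment stroke.
print(resample_path([(0, 0), (3, 4), (6, 8)], spacing=2.5))
</code></pre>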
<p>We designed our next prototype around the new draw-to-move mechanic. As lines were easiest to draw on large, flat planes, setting the game on a tabletop would allow for easy filming, good opportunities for interactions between human and virtual characters, and easy player navigability. We were already drawn to the idea of small worlds like <a href="https://en.wikipedia.org/wiki/The_Borrowers" target="_blank" rel="noreferrer">The Borrowers</a>, but we knew fully rigged and animated characters like Tolla were too complex for us to manage internally, so we settled on a more <em><a href="https://en.wikipedia.org/wiki/Y%c5%8dkai" target="_blank" rel="noreferrer">Yokai</a></em> idea of inanimate objects brought to life. This led to Sharpen (shown below), in November 2017.</p>






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center md:px-24">
      

<div class="flex items-center justify-center">
  <figure>
    <video style="margin: 0"  muted autoplay loop playsinline>  
      <source src="/assets/video/playdeo/sharpen_example.mp4" type="video/mp4" />
      <source src="/assets/video/playdeo/sharpen_example.webm" type="video/webm" />
    </video>
    
    <figcaption class="text-center">The Sharpen prototype</figcaption>
    
</figure>
</div>

    </div>
  </div>
</div>
<p>Players needed to find some items strewn around the play area, giving us our very first taste of tasks in the form of fetch quests. It also allowed us to get a better feel for drawing lines to navigate, cutting between different cameras, each of which would need to carefully frame the playspace. We also explored the idea of cinematic cameras, which were visually interesting, but did not necessarily work well with touch interaction. We were finding the right balance between interactivity and narrative, and this led us to name these shots as either Coverage or Sequences. Coverage was used for line drawing, while Sequences were used to tell the story.</p>
<p>We also shot it with a very shallow depth of field, and this taught us an important lesson. The more extreme the depth-of-field effect, the less stable the motion track, meaning some video clips in the demo ended up being very low quality. If we were going to make a lot of these camera angles work, we needed to be far more modest with the focal plane depth and position.</p>
<p>When we had a retrospective on <strong>Sharpen</strong>, it was clear we had all the core components in place for our first game. These components were: small tabletop worlds, mixed human and virtual character interaction, line drawing, multiple camera angles and basic fetch quests. It was now time to commit to our first game release, but we needed a more charismatic main character &hellip;</p>
<p>Continued in 
      
    <a href="https://nick.recoil.org/work/playdeo-building-avo/">part two of the Playdeo story</a>.</p>
]]></content:encoded>
      </item>
    
      <item>
        <title>Playdeo</title>
        <link>https://nick.recoil.org/work/playdeo/</link>
        <guid>https://nick.recoil.org/work/playdeo/</guid>
        <pubDate>Wed, 30 Mar 2022 22:18:02 UTC</pubDate>
        <description>&lt;![CDATA[]]></description>
        <content:encoded>&lt;![CDATA[<p>Playdeo was a mobile games studio set up by Jack Schulze, Timo Arnall and myself in 2016 to explore the possibilities of full screen video married with a 3D game engine.</p>
<p>This union was very novel, with almost no precedent for the kinds of interaction opportunities it offered. In situations like these, the work ahead would require a lot of <a href="http://berglondon.com/blog/2005/12/12/material-explorations/" target="_blank" rel="noreferrer">material exploration, and thinking through making</a>. Iteration is key, and in this article I&rsquo;ll show some examples of the various prototypes we built, a rough timeline, and the key milestones we reached which finally unlocked our first fully shipped game, Avo.</p>
<p>In writing this piece, there are many aspects of this work which have only become apparent to me in hindsight. Material exploration can be a focused and somewhat myopic state to work in. You&rsquo;ve not yet got any perspective, as there&rsquo;s no whole form from which to stand back and assess. You set out to create user-centric and experiential milestones, and assemble a rough technical infrastructure to support it. You want to get to an end-to-end build by taking as many shortcuts as possible.</p>
<p>As humans, we love thinking about these cumulative periods of iterative work in terms of <em>Eureka!</em> moments, or talking of overnight successes. I was guilty of thinking exactly this, with one key piece of work standing out in my memory as <strong>THE</strong> moment it all came together: the line-drawing method of moving your character that Jon Topf came up with.</p>
<p>The truth is that there were numerous key moments spread over years, each usually providing a solid layer on which other work could sit. Sometimes these were obvious, and sometimes more subtle, but each layer created new possibilities. In great teams, everyone contributes to this gradual layering process.</p>
<p>From your own perspective, there&rsquo;s also a natural tendency to discount your own work as less important than that of others, but that&rsquo;s your own bias talking. What you&rsquo;re working on is obviously not going to come as some sort of pleasant surprise. It&rsquo;s also important to be humble and let work speak for itself. From an outsider&rsquo;s perspective there will be many contributions from all corners that unlock a complex finished product, each important in its own right. Letting in a culture of rock star contributions is neither accurate nor healthy for long-term teamwork.</p>
<p>So here we&rsquo;ll see how we worked our way from a smoke-and-mirrors demo on a laptop through to a shipped iOS App Store game with 4 million downloads.</p>
<h2 id="the-spark" class="relative group">The Spark <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#the-spark" aria-label="Anchor">#</a></span></h2><p>Back in 2015 Jack had been thinking about the possibilities of touchable video when he saw his young daughters reaching out and touching the screen on their iPad while watching a movie. Everything else you do with an iPad involves screen interaction, and in their minds video shouldn&rsquo;t be any different. This make Jack wonder how true touch interaction with the video picture might actually work, and he set to work making a prototype. This was before my direct involvement, and I think he worked with <a href="https://gregborenstein.com" target="_blank" rel="noreferrer">Greg Borenstein</a> to get things off the ground, having previously worked successfully with Greg at BERG.</p>
<p>Some time later, Jack invited me for a coffee and showed me his first working prototype, running on a laptop, that integrated three key elements: the video, a 3D scene and camera motion tracking. The prototype barely held together. There was no audio, the file sizes were huge, playback wasn&rsquo;t smooth, rendering was inefficient; the list went on. However, in spite of all that, there was something undeniably magical about it. You could render 3D objects into the video with rock-solid believability, and you could interact contextually with anything on the screen. It worked better than AR did at the time, and was under the player&rsquo;s control, unlike something pre-rendered.</p>
<p>






  
  
<figure><img src="/assets/img/playdeo/video_with_mesh.jpeg" alt="A later prototype showing the visible depth mesh" class="mx-auto my-0 rounded-md" />
</figure>
</p>
<p>The core elements were present, but to truly unlock a viable future for the tech, we knew we had to get a prototype working on mobile devices. This would immediately derisk the project by a huge amount, and since I&rsquo;d spent time developing native iOS apps before, Jack wanted to know if this was something I&rsquo;d be interested in working on. There was clearly enough here to show the enormous potential, so I eagerly hopped on board.</p>
<p>The first thing I worked on was video playback. There&rsquo;s no way you could ship a product with JPEG flip-books, so we&rsquo;d need a way of decoding video frames in real time. After about three weeks of work in October 2015, I had a very crude iOS prototype working where we could decode true MP4 video frames and pass them into Unity as a standard texture. Crucially, this also included discrete frame numbers, so we could look up the accompanying camera metadata when rendering the video frames. This gave us the confidence to start scaling up our work, and start working with someone who really knew the Unity engine. While I&rsquo;d picked up just enough to get this small breakthrough working, Unity is a vast system, and we needed someone who was comfortable sketching and developing with it.</p>
<p>Throughout late 2015 and early 2016 <a href="http://ludopathic.co.uk/PressKit/index.php" target="_blank" rel="noreferrer">Aubrey Hesselgren</a> joined us on the very first prototypes. These were crude, and relied on very ad-hoc processes of ingest and data manipulation, but we began to get a feel for the challenges ahead. Builds of our test code would only work on actual phones rather than inside the Unity editor. Building and deploying was slow, and overall iteration time was painful. We&rsquo;d frequently have issues like playback synchronisation problems which involved a lot of trial-and-error debugging. To diagnose timing issues, we burned timecode into the video so we could check that the actual frame being displayed matched the frame number given to Unity by the plugin. Because this system had to run at a faultless 60 frames per second, we often used the ultra slow-motion video recording facility in iOS to check our synchronisation. Looking back now, I&rsquo;m struck by how crude our debugging systems were, but back then we were at the cutting edge, and anything we needed we had to make ourselves. No part of the existing Unity ecosystem was geared up to help us with our unique problems.</p>
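<p>A hedged sketch of the burn-in idea (not our exact command): ffmpeg&rsquo;s drawtext filter can stamp the current frame number onto every frame, which you can then read back off an ultra slow-motion recording of the phone screen. The styling is illustrative, and depending on the ffmpeg build you may need to point drawtext at a font file explicitly.</p>
<pre><code class="language-python"># Sketch: burn the frame number into every frame so a slow-motion recording of the
# phone screen can be compared against the frame index the plugin reports to Unity.
# The drawtext expansion %{n} is the current frame number; styling is illustrative,
# and some ffmpeg builds require an explicit fontfile= option.
import subprocess

def burn_in_frame_numbers(src: str, dst: str) -> None:
    drawtext = ("drawtext=text='%{n}':x=20:y=20:fontsize=72:"
                "fontcolor=white:box=1:boxcolor=black@0.6")
    subprocess.run(["ffmpeg", "-y", "-i", src, "-vf", drawtext, dst], check=True)

burn_in_frame_numbers("chapter_01.mp4", "chapter_01_debug.mp4")
</code></pre>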



<div class='mx-auto'>
  <div class="mx-auto px-2">
    <div class="grid grid-cols-1 gap-4 justify-items-center">
      <div class="flex justify-center">
        <video muted autoplay loop playsinline width="50%">  
          <source src="/assets/video/playdeo/slowmo_debug.mp4" type="video/mp4" />
          <source src="/assets/video/playdeo/slowmo_debug.webm" type="video/webm" />
        </video>
      </div>
    </div>
  </div>
</div>
<p>As time went on, I began to bring formality and automation to our data pipeline and build process. The ingest process consisted of an ever-growing Python script that formed the spine of multi-stage data wrangling. Because we wanted to have near-instantaneous access to any video frame, we used ffmpeg to concatenate all the videos together into a single seekable file. We used Autodesk&rsquo;s FBX Python library to allow us to programmatically get keyframe data from the camera track, rather than relying on Unity&rsquo;s systems, which always wanted to smooth this motion out. The script also started to enforce naming conventions to tie all the disparate elements together automatically, and it attempted to identify and reject human error in the upstream processes, preventing bad data from entering builds. This would help reduce overall debugging time, even if it looked overly fussy from the video post-production team&rsquo;s point of view.</p>
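<p>As an illustration of that kind of convention enforcement, here&rsquo;s a small Python check that every delivered clip matches an agreed filename pattern and rejects anything that doesn&rsquo;t before it can reach a build. The pattern and directory names are invented for the example; they are not the real convention we used.</p>
<pre><code class="language-python"># Sketch: enforce a shot-naming convention on incoming files and refuse to ingest
# anything that doesn't match. The chapter/shot/take pattern (e.g. ch03_sh012_tk02)
# is an invented example, not the real Playdeo convention.
import re
from pathlib import Path

SHOT_PATTERN = re.compile(r"^ch\d{2}_sh\d{3}_tk\d{2}$")

def partition_deliveries(delivery_dir: Path):
    """Split incoming clips into (accepted, rejected) lists by filename convention."""
    accepted, rejected = [], []
    for clip in sorted(delivery_dir.glob("*.mp4")):
        (accepted if SHOT_PATTERN.match(clip.stem) else rejected).append(clip)
    return accepted, rejected

accepted, rejected = partition_deliveries(Path("deliveries"))
for clip in rejected:
    print(f"REJECTED (bad filename): {clip.name}")
</code></pre>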
<p>For the build process we used Fastlane. I disliked Unity&rsquo;s automatically generated Xcode projects, as it would frequently be out of step with iOS releases, and I wanted a way to manipulate the generated project files independently. I&rsquo;d seen Fastlane put to excellent work by <a href="https://www.tomtaylor.co.uk/" target="_blank" rel="noreferrer">Tom Taylor</a> in a previous job, and knew it represented the perfect Swiss Army Knife for manipulating, building and distributing our prototypes.</p>



<div class='mx-auto'>
  <div class="mx-auto px-2">
    <div class="grid grid-cols-2 gap-4 justify-items-center">
      <div class="flex justify-center">
        <video muted autoplay loop playsinline width="100%">  
          <source src="/assets/video/playdeo/orange_car_1.mp4" type="video/mp4" />
          <source src="/assets/video/playdeo/orange_car_1.webm" type="video/webm" />
        </video>
      </div>
      <div class="flex justify-center">
        <video muted autoplay loop playsinline width="100%">  
          <source src="/assets/video/playdeo/orange_car_2.mp4" type="video/mp4" />
          <source src="/assets/video/playdeo/orange_car_2.webm" type="video/webm" />
        </video>
      </div>
    </div>
  </div>
</div>
<p>As we got more comfortable working with this new medium, we focused on producing the <strong>Orange Car Demo</strong>, shown above. We felt this demonstrated the amazing potential of what we could make in an easily digestible way, and more importantly it ran directly on your phone. On the strength of this demo, we sought some initial investment to start scaling Playdeo up to a full company. Games industry mogul Chris Lee joined us as a fourth founder, also unlocking the inner mysteries of the games industry, as Jack, Timo and I had not previously worked in this medium. We settled into a co-working space in Whitechapel, East London. Having spent much of the previous time camped out in Timo&rsquo;s mother&rsquo;s front room, three or four of us packed in like sardines, it was a welcome step. It was noisy and hectic, and inexplicably the tele-sales entrepreneurs would always love having their loudest conversations just outside our door. On the plus side we had one of the most important pieces of equipment, a huge whiteboard. I really think it&rsquo;s the intellectual and spiritual hearth for people doing collaborative, inventive work.</p>




<p>By September 2016 we were working with <a href="https://yeray.dev/" target="_blank" rel="noreferrer">Yera Diaz</a> to make a Unity editor plugin for video playback. We would finally be able to prototype by simply hitting the play button on our laptops rather than requiring a whole iOS build to be made before we could see anything working. This would transform the experience of exploring this new medium, and accelerate our progress towards our first game.</p>




<p>In October 2016, <a href="https://www.glowmade.com/" target="_blank" rel="noreferrer">Jonny Hopper and Mike Green from Glowmade</a> briefly joined us to smarten up some of the core code, and to start thinking about gameplay and interaction. We experimented with a platform game, and at that stage we were still very much treating the phone like a TV: landscape orientation, with virtual joysticks for control. We were starting to mould the codebase into something where we could truly experiment, and to derive a predictable and quick pipeline.</p>
<p>


<div class='mx-auto'>
  <div class="mx-auto px-2">
    <div class="grid grid-cols-1 gap-4 justify-items-center">
      <div class="flex justify-center">
        <video muted autoplay loop playsinline width="75%">  
          <source src="/assets/video/playdeo/kerbside.mp4" type="video/mp4" />
          <source src="/assets/video/playdeo/kerbside.webm" type="video/webm" />
        </video>
      </div>
    </div>
  </div>
</div>



<div class='mx-auto'>
  <div class="mx-auto px-2">
    <div class="grid grid-cols-2 gap-4 justify-items-center">
      <div class="flex justify-center">
        <video muted autoplay loop playsinline width="100%">  
          <source src="/assets/video/playdeo/torchlight.mp4" type="video/mp4" />
          <source src="/assets/video/playdeo/torchlight.webm" type="video/webm" />
        </video>
      </div>
      <div class="flex justify-center">
        <video muted autoplay loop playsinline width="100%">  
          <source src="/assets/video/playdeo/clean_up_my_mess.mp4" type="video/mp4" />
          <source src="/assets/video/playdeo/clean_up_my_mess.webm" type="video/webm" />
        </video>
      </div>
    </div>
  </div>
</div></p>
<p>Soon after, we started working on a number of prototypes: <strong>Day/Night</strong>, <strong>Time Travelling</strong>, <strong>Kerbside</strong>, <strong>Physics Toy</strong> and <strong>Clean Up My Mess</strong>. All of these helped bring forward the idea that you were playing in video, not just watching it. While each one was its own separate prototype concept, all of them explored touch interactions in various ways. Should the phone be horizontal or vertical? Was a virtual joystick really the best way for players to interact? How do you receive feedback?</p>



<div class='mx-auto'>
  <div class="mx-auto px-2">
    <div class="grid grid-cols-1 gap-4 justify-items-center">
      <div class="flex justify-center">
        <video muted autoplay loop playsinline width="75%">  
          <source src="/assets/video/playdeo/dragging_into_the_world.mp4" type="video/mp4" />
          <source src="/assets/video/playdeo/dragging_into_the_world.webm" type="video/webm" />
        </video>
      </div>
    </div>
  </div>
</div>
<p>For some of our experiments, we needed a way of transitioning from a 2D touch to a 3D drag, as can be seen above. Although <em>Avo</em> never shipped with any draggable UI elements, we did need to blend our 2D and 3D touch thinking for the line drawing mechanic.</p>
<h2 id="new-offices-new-people" class="relative group">New offices, new people <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#new-offices-new-people" aria-label="Anchor">#</a></span></h2><p>At the start of 2017 we moved to <a href="https://thetrampery.com/workspaces/republic/" target="_blank" rel="noreferrer">The Trampery Republic</a>, a workspace in East India Dock in East London, and started to scale up our headcount. This was exciting but also put pressure on everybody, as our tools were still very much at an early stage. If you couldn&rsquo;t program in Unity yourself, it was tough to achieve anything technically. We lacked sophisticated editor tools, so all design work would have to be through sheer imagination first and foremost, and this is particularly tough when working with a new medium that lacked a back catalog of reference material.</p>



<div class='mx-auto'>
  <div class="mx-auto px-2">
    <div class="grid grid-cols-1 gap-4 justify-items-center">
      <div class="flex justify-center">
        <video muted autoplay loop playsinline width="60%">  
          <source src="/assets/video/playdeo/proto_tolla.mp4" type="video/mp4" />
          <source src="/assets/video/playdeo/proto_tolla.webm" type="video/webm" />
        </video>
      </div>
    </div>
  </div>
</div>
<p>In April 2017 we started working with <a href="https://supersplinestudios.com/" target="_blank" rel="noreferrer">Super Spline Studios</a> and <a href="https://x.com/shanazbyrne" target="_blank" rel="noreferrer">Shanaz Byrne</a> on a project we called <strong>Night Garden</strong>, featuring a character called <em>Tolla</em>, seen above. It was our first time experimenting with humanoid animations, inverse kinematics, enemies and a whole slew of other features. It was an on-rails runner with limited control over where Tolla was positioned, set against a single long-take video of Timo&rsquo;s mum&rsquo;s garden. Ultimately we didn&rsquo;t take it forward as we felt the gameplay wasn&rsquo;t sufficiently diverse or replayable, but we were slowly improving our capabilities and ambitions. It&rsquo;s at this point that we fully committed to the vertical orientation as our preferred way of holding the phone, and to using a single finger for most interactions. It was the right balance between interaction, comfort and screen visibility. Bear in mind this was before TikTok, YouTube Stories or any other large-scale proof that vertical video would be accepted by our audience.</p>



<div class='mx-auto'>
  <div class="mx-auto px-2">
    <div class="grid grid-cols-1 gap-4 justify-items-center">
      <div class="flex justify-center">
        <video muted autoplay loop playsinline width="60%">  
          <source src="/assets/video/playdeo/cafe_racer.mp4" type="video/mp4" />
          <source src="/assets/video/playdeo/cafe_racer.webm" type="video/webm" />
        </video>
      </div>
    </div>
  </div>
</div>
<p>In August 2017 we briefly went back to the idea of controlling small cars on a race track, similar to the original orange car demo, but now shot vertically with the single-finger interaction. This featured a small but crucial new facility for us: cutting to a different camera as you approached the edge of the screen. Although this brief exploration of racing wasn&rsquo;t taken forward, the idea of cutting between cameras would stay, allowing the player to explore the 3D space under their own control. This threw up all kinds of interesting questions about continuity and <em>video time</em> vs <em>game time</em>. The player experiences each use of a video clip in a strictly linear fashion, so we had to be careful about clips mutating global state. If we use a clip where someone places down a cup of coffee, then each subsequent clip we use must show the coffee cup on the table. Video clips designed for reuse had to be as neutral as possible.</p>
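<p>Purely as an illustration of that continuity constraint (this isn&rsquo;t our actual tooling, and the clip names, props and state values are invented), a toy check might look something like this:</p>
<pre><code class="language-python"># Illustrative only: a toy continuity check for reusable video clips.
# The clip names, props and state values are all invented for this sketch.
from dataclasses import dataclass, field

@dataclass
class Clip:
    name: str
    reusable: bool                                  # intended to be played more than once
    requires: dict = field(default_factory=dict)    # prop -> state the clip shows at its start
    mutates: dict = field(default_factory=dict)     # prop -> state the clip leaves behind

def validate_sequence(clips):
    """Walk a candidate sequence of clips and flag continuity errors."""
    world = {}
    errors = []
    for clip in clips:
        for prop, expected in clip.requires.items():
            actual = world.get(prop)
            if actual is not None and actual != expected:
                errors.append(f"{clip.name}: shows {prop}={expected} but world has {prop}={actual}")
        if clip.reusable and clip.mutates:
            errors.append(f"{clip.name}: marked reusable but mutates {sorted(clip.mutates)}")
        world.update(clip.mutates)
    return errors

# A one-off clip puts the coffee cup down; every later clip must agree with that.
place_cup = Clip("billie_places_cup", reusable=False, mutates={"coffee_cup": "on_table"})
idle_loop = Clip("billie_idle", reusable=True, requires={"coffee_cup": "on_table"})
bad_idle  = Clip("billie_idle_no_cup", reusable=True, requires={"coffee_cup": "absent"})

print(validate_sequence([place_cup, idle_loop, bad_idle]))
</code></pre>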



<div class='mx-auto'>
  <div class="mx-auto px-2">
    <div class="grid grid-cols-1 gap-4 justify-items-center">
      <div class="flex justify-center">
        <video muted autoplay loop playsinline width="60%">  
          <source src="/assets/video/playdeo/apple_table.mp4" type="video/mp4" />
          <source src="/assets/video/playdeo/apple_table.webm" type="video/webm" />
        </video>
      </div>
    </div>
  </div>
</div>
<p>






  
  
<figure><img src="/assets/img/playdeo/drawing_line_commit.png" alt="The line drawing mechanic being committed" class="mx-auto my-0 rounded-md" />
</figure>
</p>
<div class="bg-indigo-300">
  <img class="object-cover h-24 w-full" src="/assets/img/playdeo/drawing_line_commit.png" />
</div>
<p>The next stage of prototyping was unlocked by Jonathan Topf. Because of his work on Trickshot, he had a good feel for players using a touch screen as the primary control system. His insight was to allow players to draw an intended path of movement for the character, rather than manipulating indirect controls like a touchpad or virtual joystick. I remember being really impressed at the time, feeling that this method of input was right for our game, and saying as much on the commit in our Slack.</p>
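<p>For a rough sense of what drawing a path involves under the hood, here&rsquo;s a minimal sketch (not the code from <em>Avo</em>, and with made-up camera numbers) of projecting 2D touch points onto a flat table plane via a ray-plane intersection:</p>
<pre><code class="language-python"># Minimal sketch: turning a swipe of 2D touch points into a 3D path on a table.
# Assumes a simple pinhole camera looking down at a horizontal table plane (y = 0);
# every number here is illustrative, not a value from the actual game.
import numpy as np

def touch_to_table(u, v, cam_pos, cam_rot, fov_deg=60.0, width=1080, height=1920, table_y=0.0):
    """Cast a ray through screen pixel (u, v) and intersect it with the plane y = table_y."""
    f = (height / 2.0) / np.tan(np.radians(fov_deg) / 2.0)      # focal length in pixels
    d_cam = np.array([u - width / 2.0, (height / 2.0) - v, f])  # camera space: x right, y up, z forward
    d_world = cam_rot @ (d_cam / np.linalg.norm(d_cam))         # rotate the ray into world space
    t = (table_y - cam_pos[1]) / d_world[1]                     # solve cam_pos.y + t * dir.y = table_y
    if t > 0:
        return cam_pos + t * d_world
    return None                                                 # the ray points away from the table

# A camera 1.2 m above the table, pitched 50 degrees downwards.
pitch = np.radians(50)
cam_rot = np.array([[1, 0, 0],
                    [0, np.cos(pitch), -np.sin(pitch)],
                    [0, np.sin(pitch),  np.cos(pitch)]])
cam_pos = np.array([0.0, 1.2, -0.6])

# Sample a few points from a swipe and build the 3D path the character would follow.
swipe = [(540, 1500), (560, 1300), (600, 1100)]
path = [touch_to_table(u, v, cam_pos, cam_rot) for (u, v) in swipe]
print(path)
</code></pre>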



<div class='mx-auto'>
  <div class="mx-auto px-2">
    <div class="grid grid-cols-1 gap-4 justify-items-center">
      <div class="flex justify-center">
        <video muted autoplay loop playsinline width="60%">  
          <source src="/assets/video/playdeo/sharpen_example.mp4" type="video/mp4" />
          <source src="/assets/video/playdeo/sharpen_example.webm" type="video/webm" />
        </video>
      </div>
    </div>
  </div>
</div>
<p>We shot a demo called <strong>Sharpen</strong> which featured a player-controlled inanimate object: not yet an avocado, but a pencil sharpener. During the intro, the human in the scene would drop a magic droplet of liquid onto a regular pencil sharpener, eyes would suddenly spring out, and it would be brought to life, much to the human&rsquo;s delight.</p>
<p>In this demo, we had a basic task for the player: find some items strewn around the play area, giving us our very first taste of player tasks in the form of <em>fetch quests</em>. It also allowed us to get a better feel for drawing lines to navigate and for cutting between different cameras, each of which needed to carefully frame the playspace. We also explored the idea of cinematic cameras, which looked good but didn&rsquo;t invite immediate interaction: such a camera might be in motion, or might not frame the table surface in a way that made line drawing easy.</p>
<p>We also shot it with a very shallow depth of field, and this taught us an important lesson, because the more extreme the DoF effect, the less stable the motion track we got. If we were going to make a lot of these cameras work correctly, we needed to be much more modest with the focal plane depth and position.</p>
<p>Sharpen was then followed by a more fleshed-out idea named Tiny Frankenstein. It kept the idea of inanimate objects brought to life, but incorporated Jon&rsquo;s new procedural walking system to give the characters much more life. Alpha 1 was shot with Jack as the mad inventor on a makeshift set designed to test blocking, but it was becoming increasingly obvious that we needed an actor who would help us tell the story properly, and a larger, more lavishly decorated space. Our pencil sharpener hero had now become an avocado.</p>
<p>Alpha 2 featured a custom-built set, <a href="https://www.imdb.com/name/nm6697463/" target="_blank" rel="noreferrer">Katie Reece</a> as our hero inventor Billie, an early version of Avo with no arms, props with special effects and a much stronger narrative with lines of dialogue. This was influenced by Wallace and Gromit, and a desire to keep things simple, so our protagonist would be silent. Much of the editing would be done outside of the game engine, in order to minimise the amount of unused video in the application while still allowing us to tweak the timing of sequences dynamically. This meant keeping handles on the clips: a post-production term where you give yourself extra footage at the front and back of each clip, with the expectation that you&rsquo;d start playing them 1 or 2 seconds from the beginning, and stop playing them 1 or 2 seconds from the end.</p>
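<p>As a simplified, hypothetical example of what that bookkeeping looks like in practice (the durations, handle lengths and nudges here are made up, not values from our pipeline):</p>
<pre><code class="language-python"># Simplified illustration of clip "handles": each delivered clip carries spare
# footage at both ends, and the playable in/out points sit inside it.
# Durations, handle lengths and nudges are illustrative, not from our pipeline.

def playback_window(total_duration, handle_in=2.0, handle_out=2.0, nudge_in=0.0, nudge_out=0.0):
    """Return (start, end) seconds into the raw clip for playback.

    nudge_in/nudge_out let us retime a sequence later by eating into the
    handles, without re-exporting or re-encoding any video.
    """
    start = handle_in + nudge_in
    end = total_duration - handle_out + nudge_out
    if start >= 0 and end > start and total_duration >= end:
        return start, end
    raise ValueError("nudges exceed the available handles")

# A 14 s raw clip plays as a 10 s beat by default...
print(playback_window(14.0))                                  # (2.0, 12.0)
# ...but we can start it half a second earlier and let it run a second longer
# when the pacing of a sequence needs it.
print(playback_window(14.0, nudge_in=-0.5, nudge_out=1.0))    # (1.5, 13.0)
</code></pre>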






<div class='container'>
  <div class="mx-auto">
    <div class="grid grid-cols-1 md:grid-cols-1 gap-2 justify-items-center ">
      

<div class="flex items-center justify-center">
  <figure>
    <video style="margin: 0"  muted autoplay loop playsinline>  
      <source src="/assets/video/playdeo/alpha_2.mp4" type="video/mp4" />
      <source src="/assets/video/playdeo/alpha_2.webm" type="video/webm" />
    </video>
    
    <figcaption class="text-center">Avo started out life with just his legs in Alpha 2</figcaption>
    
</figure>
</div>

    </div>
  </div>
</div>
<p>Alpha 2 got more and more polish, and it began to generate its own gravity. Exciting, fun, full of heart, and finally something we could scale up into a full game. Ryan North and Gemma Arrowsmith were brought in to help us create a fun story. It ended up being wildly ambitious and needed scaling back, but the bones of it were there. What followed was location scouting, set building, bespoke prop creation, full script development, table reads and all of the usual aspects of a full TV production, except done on a small budget and very much in the guerrilla film-making school.</p>
<p>It&rsquo;s at this point that we finished the raw invention phase and tipped into production, polish and delivery. I cover a lot more of this in the technical post, but filming started in May 2018 and continued for approximately 10 weeks. I was split across supporting our data pipeline and writing systems in Unity. We used Blackmagic&rsquo;s Resolve, which used to corrupt timelines frequently, and I had to hook up a snapshot system for Postgres to allow us to roll back efficiently when this happened. I also had to work on scaling up the Python-based processing script to cope with a vastly larger number of clips running through it. We were now under pressure to deliver a large volume of work quickly, so I needed to put in far more safeguards and cross-checks to prevent human error from slipping by unnoticed.</p>
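<p>A snapshot and rollback helper in that spirit can be sketched with plain <code>pg_dump</code> and <code>pg_restore</code>; this isn&rsquo;t the original script, and the database name and snapshot directory below are placeholders:</p>
<pre><code class="language-python">#!/usr/bin/env python3
# Rough sketch of a Postgres snapshot/rollback helper, in the spirit of what we
# used to recover from corrupted Resolve timelines. The database name and
# snapshot directory are placeholders, not our real setup.
import subprocess
import sys
import time
from pathlib import Path

DB_NAME = "resolve_projects"           # hypothetical database name
SNAP_DIR = Path("/backups/resolve")    # hypothetical snapshot location

def snapshot():
    SNAP_DIR.mkdir(parents=True, exist_ok=True)
    path = SNAP_DIR / f"{DB_NAME}-{time.strftime('%Y%m%d-%H%M%S')}.dump"
    # Custom-format dump so pg_restore can rebuild the database later.
    subprocess.run(["pg_dump", "--format=custom", "--file", str(path), DB_NAME], check=True)
    print(f"snapshot written to {path}")

def rollback(dump_path):
    # --clean --if-exists drops existing objects before recreating them.
    subprocess.run(["pg_restore", "--clean", "--if-exists", "--dbname", DB_NAME, dump_path], check=True)
    print(f"restored {DB_NAME} from {dump_path}")

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "rollback":
        rollback(sys.argv[2])
    else:
        snapshot()
</code></pre>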
<p>From about September 2018 to January 2019 we implemented the 8 episodes seen in the main game. We added subtitle support, full music and sound effect support via <a href="https://www.audiokinetic.com/products/wwise/" target="_blank" rel="noreferrer">Audiokinetic&rsquo;s Wwise</a>, Bluetooth audio support, the save checkpoint system, localisation, analytics, general UI, IAP integration, On-Demand Resource support, AR mode, low and high resolution videos, and a whole host of other things. We had no specific producer, so we took turns to run our weekly planning meetings. These meetings were crucial for establishing bottlenecks, and towards the end I was generally responsible for keeping the flow of work steady as it became more and more technical. It was a remarkably intense time, and for the most part highly productive.</p>




<p>We finally launched Avo at the end of January 2019, and it has gone on to have nearly 4 million downloads, and is regularly promoted in the App Store to this day. For our first title I consider it a huge success. While it may seem coherent and polished from the outside, it really was a hard won product from 3 years of inventive exploration in a brand new medium.</p>
<h2 id="the-making-of-video" class="relative group">The &lsquo;Making of&rsquo; video <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#the-making-of-video" aria-label="Anchor">#</a></span></h2><p>Timo and Jack organised a <a href="https://www.youtube.com/watch?v=Za4zDlRYrZ4" target="_blank" rel="noreferrer">behind the scenes video of Avo&rsquo;s production</a>. I&rsquo;ve uploaded it to our instance of <a href="https://crank.recoil.org/" target="_blank" rel="noreferrer">PeerTube running on recoil.org</a>. It&rsquo;s a lovely grand tour of the studio at that time.</p>
<div class="relative h-0 overflow-hidden max-w-full w-full" style="padding-bottom: 56.25%">
  <iframe class="absolute top-0 left-0 w-full h-full" sandbox="allow-same-origin allow-scripts allow-popups" src="https://crank.recoil.org/videos/embed/8cbf6ffc-89ff-4cb6-bdd5-9b9525f8c318" frameborder="0" allowfullscreen></iframe>
</div>



<div class='mx-auto'>
  <div class="mx-auto px-2">
    <div class="grid grid-cols-1 gap-4 justify-items-center">
      <div class="flex justify-center">
        <video muted autoplay loop playsinline width="60%">
          <source src="/assets/video/playdeo/avo_photogrammetry.mp4" type="video/mp4" />
          <source src="/assets/video/playdeo/avo_photogrammetry.webm" type="video/webm" />
        </video>
      </div>
    </div>
  </div>
</div>
]]></content:encoded>
      </item>
    
      <item>
        <title>About</title>
        <link>https://nick.recoil.org/about/</link>
        <guid>https://nick.recoil.org/about/</guid>
        <pubDate>Mon, 31 Jan 2022 18:09:53 UTC</pubDate>
        <description>&lt;![CDATA[]]></description>
        <content:encoded>&lt;![CDATA[<p>You found the secret page! There isn&rsquo;t a specific about page, the whole site is about me.</p>
]]></content:encoded>
      </item>
    
      <item>
        <title>How MOO coupled product innovation with mass manufacturing</title>
        <link>https://nick.recoil.org/work/moo-nfc/</link>
        <guid>https://nick.recoil.org/work/moo-nfc/</guid>
        <pubDate>Wed, 21 Oct 2015 09:47:06 UTC</pubDate>
        <description>&lt;![CDATA[]]></description>
<content:encoded>&lt;![CDATA[<p>Hardware, as they say, is hard. In helping develop the concept of <a href="http://moo.com/products/nfc/" target="_blank" rel="noreferrer"><strong>Business Cards+</strong></a> into a fully realised, mass manufactured product that ships worldwide, we faced a series of engineering challenges which needed some clever thinking to overcome. I’d like to share some of these with you in this post.</p>
<h2 id="in-the-beginning" class="relative group">In the beginning <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#in-the-beginning" aria-label="Anchor">#</a></span></h2><p>To set the scene, in 2014 <a href="http://moo.com/" target="_blank" rel="noreferrer">MOO</a> restarted a project whose aim was to embed NFC chips into its business cards, transforming them into a smart, connected product. A similar initiative was attempted a few years earlier, but it proved to be a little too far ahead of its time and had to be put on hold. In the intervening years technology had finally caught up with the vision.</p>
<p>






  
  
<figure><img src="/assets/img/moo/moo_pp_printed_nfc_board.jpeg" alt="Printed electronics — The black rectangle is the actual NFC chip" class="mx-auto my-0 rounded-md" />
</figure>
</p>
<p>These new business cards would use cutting edge printed electronics coupled with MOO’s paper know-how. Inside each card is a tiny NFC chip bonded directly onto paper and programmed with a unique URL. This URL allows the owner to customise its behaviour whenever they like, even after those cards have been given out to prospective clients and businesses.</p>
<p>I came on board to help MOO design, develop and implement an end-to-end system which could produce the physical cards themselves, along with the online services that the cards would seamlessly interface with.</p>
<p>I knew this would be a difficult task, but this is where my history with connected products becomes important. Back in 2012 I worked at the design consultancy <a href="http://berglondon.com/" target="_blank" rel="noreferrer">BERG</a> on a product called <a href="http://littleprinter.com/" target="_blank" rel="noreferrer">Little Printer</a>. It was an internet-connected thermal receipt printer with custom hardware, firmware and software that we designed and engineered entirely in-house.</p>
<p>






  
  
<figure><img src="/assets/img/moo/moo_pp_little_printer.jpeg" alt="Little Printer — The most complex product I’ve ever helped build" class="mx-auto my-0 rounded-md" />
</figure>
</p>
<p>I learned a lot in helping bring it to market, and one of the trickiest aspects to get right was the join between the <em>physical</em> and <em>virtual</em>. With connected products, you have something which not only exists in your hand, it also lives in the cloud, and the production process needs careful construction to ensure both are made successfully and are correctly linked together.</p>
<p>With <strong>Business Cards+</strong>, this means that the machinery and process we needed in order to program each card must be smart and fully connected to MOO’s backend systems, and far more sophisticated than anything that had been implemented to date. We went through a lot of early brainstorming to try to break down the problem into compartmentalised and separable areas of interest.</p>
<p>






  
  
<figure><img src="/assets/img/moo/moo_pp_brainstorming.jpeg" alt="MOO’s Chad Jennings leading an early brainstorming session" class="mx-auto my-0 rounded-md" />
</figure>
</p>
<h2 id="mass-production-principles" class="relative group">Mass production principles <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#mass-production-principles" aria-label="Anchor">#</a></span></h2><p>In order to produce these within MOO’s existing facilities, we needed to obey some core principles:</p>
<ul>
<li>Be quick and easy to operate with minimal training</li>
<li>Be scalable should production demands increase</li>
<li>Introduce minimal disruption to the existing processes</li>
<li>Use inexpensive hardware to keep costs down</li>
</ul>
<p>Existing NFC programming systems for factories can cost upwards of six figures, and the software integration costs can push that up even further, so using relatively inexpensive commodity hardware to program the NFC chips was going to be very important to us. Not only does it keep costs down, but it gives us flexibility in how we break down and arrange the individual production tasks.</p>
<p>MOO’s well-established production system makes business cards 25 at a time, grouped together on a large sheet of high quality card. To make the programming process as efficient as possible we designed a system which would use 25 separate USB NFC programmers to write to every single card on a sheet at once. This would keep efficiency and parallelism as high as possible while fitting within the constraints of the existing process.</p>
<p>We also use a barcode scanner to read specific information printed on each sheet. This information is used to make API calls back to the central servers so that we’re able to retrieve each customer’s specific data to embed on each card. Ideally the barcode scanner is the only means of input: it’s robust and reliable while maintaining operational simplicity.</p>
<p>To manage all of these peripherals we use a small single board computer running a stripped-down Linux distribution, and along with some clever scripting it binds the separate components into a single highly streamlined appliance. Below you can see a picture of a very early prototype testing multiple NFC devices in parallel.</p>
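<p>To give a feel for how those pieces hang together, here’s a heavily simplified sketch of the scan, fetch and program loop. The API endpoint, response shape and per-device write function are hypothetical stand-ins, not MOO’s actual code.</p>
<pre><code class="language-python"># Heavily simplified sketch of the programming workflow: scan a sheet's barcode,
# fetch the 25 card URLs from the backend, then program every chip in parallel.
# The endpoint, response shape and per-device write are hypothetical stand-ins.
from concurrent.futures import ThreadPoolExecutor
import requests

API_BASE = "https://example.internal/api"   # placeholder, not MOO's real backend
NUM_READERS = 25                            # one USB NFC programmer per card position

def fetch_card_urls(sheet_barcode):
    """Ask the backend which unique URL belongs on each of the 25 card positions."""
    resp = requests.get(f"{API_BASE}/sheets/{sheet_barcode}", timeout=10)
    resp.raise_for_status()
    return resp.json()["card_urls"]          # assumed: a list of 25 URLs, indexed by position

def program_card(position, url):
    """Stand-in for the real per-device write: encode the URL, write it, read it back."""
    # The real system talks to USB reader `position` here and verifies the chip.
    return True                               # pretend the write and verification succeeded

def program_sheet(sheet_barcode):
    """Program a whole sheet and return the positions that failed verification."""
    urls = fetch_card_urls(sheet_barcode)
    with ThreadPoolExecutor(max_workers=NUM_READERS) as pool:
        results = list(pool.map(program_card, range(NUM_READERS), urls))
    return [pos for pos, ok in enumerate(results) if not ok]
</code></pre>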
<p>






  
  
<figure><img src="/assets/img/moo/moo_pp_prototyping.jpeg" alt="Beginning the software prototype using a Banana Pi and 4 NFC devices" class="mx-auto my-0 rounded-md" />
</figure>
</p>
<h2 id="induction-and-iteration" class="relative group">Induction and iteration <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#induction-and-iteration" aria-label="Anchor">#</a></span></h2><p>One of the important aspects of our work was the physical form of the machine. Communication with any NFC chip happens through induction, and there are certain rules you need to follow with <a href="https://en.wikipedia.org/wiki/Near-field_magnetic_induction_communication" target="_blank" rel="noreferrer">inductive coupling</a>. In order to maximise the speed and reliability of any NFC data transfer, the gap between the NFC tag and the USB or phone device communicating with it must be as small as possible.</p>
<p>Given the dense arrangement of our 25 NFC chips, we needed to construct a bed to house each USB device which brought the circuit board flush with the surface where you rest the sheet of card.</p>
<p>






  
  
<figure><img src="/assets/img/moo/moo_pp_original_hardware.jpeg" alt="The original 2012 NFC hardware" class="mx-auto my-0 rounded-md" />
</figure>
</p>
<p>I worked closely with Phil Thomas, one of MOO’s Product Designers, and he did an amazing job of modernising the original clear prototype from 2012 to suit our newer hardware and stricter spacing requirements.</p>
<p>Phil worked on a number of iterations of the physical design: the first to update the USB hardware, a second to incorporate closer placement of the antenna, flush with the surface, and a third to accommodate a change in the antenna placement in the cards themselves.</p>
<p>






  
  
<figure><img src="/assets/img/moo/moo_pp_new_hardware.jpeg" alt="Phil’s foam board prototype to test alignment" class="mx-auto my-0 rounded-md" />
</figure>
</p>
<p>With each iteration, Phil designed the new layout in Solidworks, printed out a stencil on paper and then painstakingly cut through his stencil sheet into a piece of foam board below using a scalpel. In less than an hour we could be testing a new layout with all of the components in place.</p>
<p>Once we had assembled our final iteration, Phil then arranged for the pieces to be prepared in thick laser-cut acrylic to form a robust case when put together. It needed to be strong enough to be sent as air freight and also survive in a busy warehouse environment.</p>
<p>Not only is laser-cut acrylic relatively quick and affordable to get made, it actually gives you a nice smooth surface finish. Phil had delivered a physical enclosure that was as well thought out and tailored as the software inside.</p>
<p>






  
  
<figure><img src="/assets/img/moo/moo_pp_final_case.jpeg" alt="Phil’s laser-cut acrylic case, ready to ship" class="mx-auto my-0 rounded-md" />
</figure>
</p>
<h2 id="copper-gremlins" class="relative group">Copper Gremlins <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#copper-gremlins" aria-label="Anchor">#</a></span></h2><p>During the later stages of development we hit a particularly difficult issue when scaling up to running all 25 NFC devices at once. Whenever we ran our tests, we saw a low but consistent chance of permanently corrupting the NFC chips. This was highly unusual behaviour, and no matter what we did we could not find a reason why.</p>
<p>These sorts of bugs are some of the worst you can discover, being serious, difficult to reproduce and in turn awkward to correlate against possible sources. We lost a number of weeks writing rigorous testing processes, and repeatedly running our tests while being careful to only change one variable at a time. This was one of the lowest points of the project, and it put the whole idea of using off-the-shelf components into jeopardy.</p>
<p>After slowly ruling out much of the software, I eventually began to suspect this was a hardware issue, so we took the entire machine up to Cambridge. During the development of Little Printer in 2012 I was lucky enough to work with <a href="http://www.linaud.com/" target="_blank" rel="noreferrer">Alistair May</a> who specialises in RF (Radio Frequency) electronics, and he has a lab in one of Cambridge’s many leafy science parks helping people with these sorts of issues. We gave ourselves a day to work on the problem, and if the source of this corruption couldn’t be found we would have to scrap all of our work done so far!</p>
<p>






  
  
<figure><img src="/assets/img/moo/moo_pp_lab_diagnosis.jpeg" alt="Using some very high-end equipment in Alistair’s lab" class="mx-auto my-0 rounded-md" />
</figure>
</p>
<p>After a very tense session using some very expensive oscilloscopes and other elaborate bench equipment, we finally made a breakthrough. We had discovered a flaw in the design of the NFC readers, compounded by very cheap USB cables: the manufacturer had skimped on the amount of copper in the wires, resulting in a sub-standard electrical connection.</p>
<p>This in itself is a fascinating lesson. It’s generally common knowledge that <a href="http://www.expertreviews.co.uk/tvs-entertainment/7976/expensive-hdmi-cables-make-no-difference-the-absolute-proof" target="_blank" rel="noreferrer">premium HiFi or HDMI cables advertised as being better through the use of oxygen-free copper or gold plating are nonsense</a>, especially where a digital signal is concerned. These cables either work or not, surely? Digital by definition means <em>something</em> or <em>nothing</em>, so how does a lack of copper in our USB cables produce such strange analogue behaviour?</p>
<p>Well the detailed science behind this is something which deserves its own post, but <a href="http://www.yoctopuce.com/EN/article/usb-cables-size-matters" target="_blank" rel="noreferrer">the underlying issue here is one of resistance</a>. You can think of a cable with normal amounts of copper as a fat straw, and one with reduced copper as a thin one.</p>
<p>USB devices need power to run, and NFC specifically takes this power in gulps. If the diameter of the straw is too small (i.e. not enough copper wire), then taking quick gulps becomes extremely difficult, and this is what the oscilloscope trace shows below. The bottom yellow trace should be flat, but because of our cheap cables the device cannot draw power quickly enough, causing it to become wavy, and it’s this unwanted waviness which corrupts our chips.</p>
<p>






  
  
<figure><img src="/assets/img/moo/moo_pp_diagnosis_trace.jpeg" alt="The ‘scope trace showing the source of our data corruption" class="mx-auto my-0 rounded-md" />
</figure>
</p>
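<p>To put some rough numbers behind the straw analogy, here’s a back-of-the-envelope voltage drop calculation for a current spike through a thin versus a decent USB cable. The resistance and current figures are illustrative assumptions, not measurements from our rig.</p>
<pre><code class="language-python"># Back-of-the-envelope voltage drop (V = I * R) for a current spike through a
# USB cable. The resistance-per-metre and current values are illustrative only.

def droop(current_a, length_m, ohms_per_m):
    # The current flows out on VBUS and back on GND, so the round trip is 2x the length.
    return current_a * (2 * length_m * ohms_per_m)

spike_current = 0.4      # assumed instantaneous draw when the NFC field powers up (amps)
cable_length = 1.0       # metres

cables = {
    "decent copper": 0.06,   # ohms per metre (illustrative)
    "skimpy copper": 0.30,   # much thinner conductors (illustrative)
}

for name, ohms_per_m in cables.items():
    drop = droop(spike_current, cable_length, ohms_per_m)
    print(f"{name}: {drop * 1000:.0f} mV drop, bus sits at about {5.0 - drop:.2f} V during the spike")
</code></pre>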
<p>In order to overcome this issue we now custom modify each and every USB device with an additional high speed smoothing capacitor to ensure it works reliably.</p>
<p>






  
  
<figure><img src="/assets/img/moo/moo_pp_modifications.jpeg" alt="Each USB device we use requires modification by hand" class="mx-auto my-0 rounded-md" />
</figure>
</p>
<p>Returning from Cambridge with a fully working system, our thoughts now turned towards mechanisms which would allow us to give feedback on the chip programming process.</p>
<h2 id="simple-chips-smartlamps" class="relative group">Simple chips, smart lamps <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#simple-chips-smartlamps" aria-label="Anchor">#</a></span></h2><p>As with any electronics there are occasionally some failures, and we see this with our sheets of NFC chips where some are non-communicative. In order to alert the operator to a chip which hasn’t been successfully programmed, we needed some kind of status display system for our NFC programmer.</p>
<p>Not too long ago I was lucky enough to work on a research project for Google around the concept of <a href="http://berglondon.com/blog/2012/12/19/lamps/" target="_blank" rel="noreferrer"><strong>Smart Lamps</strong></a>, and I knew it would be a great fit in this instance.</p>
<iframe src="https://player.vimeo.com/video/55524083?h=5b6da37069" width="640" height="360" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen></iframe>
<p><a href="https://vimeo.com/55524083">Lamps: 24 Rules for smart light</a> from <a href="https://vimeo.com/bergstudio">Berg</a> on <a href="https://vimeo.com">Vimeo</a>.</p>
<p>Due to the thickness of the cards themselves, it’s impossible to illuminate any cards from underneath without an extremely bright light source, and we have already discounted the idea of a monitor on which we’d display any errors; there would be too many opportunities for transcription errors looking between the screen and the cards, especially in a busy environment. This leaves us the idea of direct illumination from above.</p>
<p>






  
  
<figure><img src="/assets/img/moo/moo_pp_testing_projection_1.jpeg" alt="Our very first tests with projection mapping" class="mx-auto my-0 rounded-md" />
</figure>
</p>
<p>To achieve this we use a <a href="https://en.wikipedia.org/wiki/Handheld_projector" target="_blank" rel="noreferrer">pico projector</a> connected directly to the HDMI port of the single board computer, and the output image is calibrated to match the surface of the machine itself. The hardware is inexpensive and, after some calibration, each of the 25 cards can be individually highlighted with any shape and colour we want.</p>
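<p>As a sketch of what that calibration amounts to, here’s a simplified mapping from positions in the 5 x 5 card grid to projector-space coordinates, interpolated between four corner points measured during calibration; the corner coordinates below are invented.</p>
<pre><code class="language-python"># Simplified sketch of mapping the 5x5 card grid onto projector pixels.
# The corner coordinates are invented; the real rig was calibrated to the bed of
# the machine. Assumes the projected view of the bed is close enough to a
# parallelogram that bilinear interpolation between the corners is sufficient.
GRID_ROWS, GRID_COLS = 5, 5

# Projector-space (x, y) of the four corners of the card area, after calibration.
TOP_LEFT     = (112.0,  64.0)
TOP_RIGHT    = (1168.0, 80.0)
BOTTOM_LEFT  = (96.0,  648.0)
BOTTOM_RIGHT = (1184.0, 672.0)

def lerp(a, b, t):
    """Linear interpolation between 2D points a and b."""
    return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

def card_centre(row, col):
    """Projector-space centre of the card at (row, col), both 0-indexed."""
    u = (col + 0.5) / GRID_COLS
    v = (row + 0.5) / GRID_ROWS
    top = lerp(TOP_LEFT, TOP_RIGHT, u)
    bottom = lerp(BOTTOM_LEFT, BOTTOM_RIGHT, u)
    return lerp(top, bottom, v)

def highlight(failed_positions):
    """Projector coordinates to flash for chips that failed to programme."""
    return [card_centre(pos // GRID_COLS, pos % GRID_COLS) for pos in failed_positions]

# e.g. chips 3 and 17 failed verification on this sheet
print(highlight([3, 17]))
</code></pre>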
<p>






  
  
<figure><img src="/assets/img/moo/moo_pp_testing_projection_2.jpeg" alt="Our fully calibrated and functional test rig" class="mx-auto my-0 rounded-md" />
</figure>
</p>
<p>Some of the designs on the cards can be very detailed and colourful, and coupled with bright ambient illumination in the warehouse, we needed to ensure maximum contrast for the projected light. We ended up using colour, flashing and motion to achieve this aim. Below you can see cards being highlighted with a “good” status after programming.</p>
<p>






  
  
<figure><img src="/assets/img/moo/moo_pp_projection_light.gif" alt="Green indicates successful programming" class="mx-auto my-0 rounded-md" />
</figure>
</p>
<p>This projection mapping technique has not only reduced the chance of human error, it’s simplified overall operation of the machine, reducing the amount of training needed. This was the last piece of the puzzle, and with operator feedback implemented, we had a finished system.</p>
<p>






  
  
<figure><img src="/assets/img/moo/moo_pp_factory_installation.jpeg" alt="Our finished programmer, complete with custom projector mount, installed in the warehouse" class="mx-auto my-0 rounded-md" />
</figure>
</p>
<h2 id="conclusion" class="relative group">Conclusion <span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"><a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#conclusion" aria-label="Anchor">#</a></span></h2><p>With a relatively modest capital outlay, we’ve introduced the means to produce a technically demanding product into MOO’s existing production line with minimal disruption. It’s flexible and scalable to match our needs, while being simple to use and maintain.</p>
<p>I hope you’ve enjoyed this look at some of the challenges we faced in bringing <a href="http://moo.com/products/nfc/" target="_blank" rel="noreferrer"><strong>Business Cards+</strong></a> to market. It’s been a challenging project on many levels, and getting to see this work as a key part of the <a href="http://www.moo.com/blog/2015/10/12/moo-presents-paper/" target="_blank" rel="noreferrer">worldwide launch</a> is incredibly satisfying.</p>
]]></content:encoded>
      </item>
    
  </channel>
</rss>
