Here are some strawman designs for prototypes (i.e., minimal, proof-of-concept implementations) of an image-to-Utile pipeline. In Rich Hickey's terms, they are easy, but not simple. That is, they lean on external software, with the associated complexity and foibles. However, they attempt not to choose poorly.

System Level

The top-most (system) level "prints" a stream of input images as Utiles. Here are two overviews, cast as pipelines (|>) from the input image data to the output Utile(s):

image_data           # JPEG, PNG, etc.
  |>  import         # preprocessed image
  |>  model          # abstract Utile model
  |>  render         # STL Utile model, etc.
  |>  print          # 3D-printed Utile

image_data           # JPEG, PNG, etc.
  |>  import         # preprocessed image
  |>  model          # abstract Utile model
  |>  render         # raster images, etc.
  |>  print          # Laser-engraved tile

Careful Reader will note that the first parts of the pipelines are identical. This is convenient (and not accidental).


The import pipeline provides a convenient and general version of the image, encoded as a data structure. The primary component of the structure is a 2D matrix of maps. Each map stores information (e.g., intensity) for a single pixel.

The model pipeline generates an abstract model of the desired tile. This might be as simple as a 2D matrix of Z values (i.e., heights). The model describes the desired characteristics, paying little or no attention to rendering and/or manufacturing issues.

The render pipeline generates a concrete model of the tile, encoded in an industry-standard file format such as JPEG or STL.

The print pipeline renders and manufactures a physical tile, using an output device (e.g., 3D printer, CNC mill). Each device has associated software to create the needed control codes, feed them to the device, monitor for problems, etc.

Importing Images

The import pipeline acquires and processes input images, cleaning up problems and generating data structures in desired format(s):

image_data           # JPEG, PNG, etc.
  |>  import         # importing pipeline
    |>  translate    # image map, as Netpbm
    |>  sanitize     # cleared of obstructions
    |>  normalize    # internal data structures


The translate step generates a 2D matrix of the input image, using Netpbm format.
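As a sketch of the translate step's output, here is a minimal parser for plain (P2) PGM text, assuming the input has already been converted to that format by a Netpbm tool. The hash keys and error handling are illustrative, not fixed:

```ruby
# Minimal parser for plain (P2) PGM text, as produced by Netpbm tools.
# Comment lines (starting with '#') are stripped; the result is a hash
# holding the dimensions and a 2D matrix of integer gray values.
def parse_pgm(text)
  tokens = text.lines
               .map { |line| line.sub(/#.*/, '') }   # drop comments
               .join(' ')
               .split
  magic = tokens.shift
  raise "not a plain PGM: #{magic}" unless magic == 'P2'

  width, height, maxval = tokens.shift(3).map(&:to_i)
  values = tokens.map(&:to_i)
  raise 'pixel count mismatch' unless values.size == width * height

  { width: width, height: height, maxval: maxval,
    pixels: values.each_slice(width).to_a }
end

pgm = <<~PGM
  P2
  # 3x2 test image
  3 2
  255
  0 128 255
  255 128 0
PGM

image = parse_pgm(pgm)
image[:pixels]  # => [[0, 128, 255], [255, 128, 0]]
```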

The sanitize step notes and removes obstructions (e.g., structural problems).

The normalize step converts the image into a convenient and general internal representation (i.e., set of data structures). The primary structure is a 2D matrix of maps, each of which stores data (e.g., intensity) about a single pixel.
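A minimal sketch of the normalize step, assuming the translate step yielded a 2D matrix of gray values. The map keys shown here (:row, :col, :intensity) are illustrative; any per-pixel data could be added:

```ruby
# Normalize a 2D matrix of gray values (0..maxval) into the internal
# representation: a 2D matrix of maps, one map per pixel.
def normalize(pixels, maxval)
  pixels.each_with_index.map do |row, r|
    row.each_with_index.map do |value, c|
      { row: r, col: c, intensity: value.to_f / maxval }
    end
  end
end

grid = normalize([[0, 128], [255, 64]], 255)
grid[1][0][:intensity]  # => 1.0
```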

Modeling Objects

The model pipeline converts the imported image into an abstract tile model, simplifying away noise and (optionally) adding styling and annotations:

image_data           # JPEG, PNG, etc.
  |>  import         # preprocessed image
  |>  model          # abstract tile model
    |> simplify      # cleared of "noise"
    |> style         # features emboldened
    |> annotate      # braille (etc) added


Let's assume that the input image is colorized or grayscale line art, as produced by a program such as OmniGraffle or Visio. Our basic task is to map input pixel characteristics (e.g., intensity) into geometric characteristics (e.g., height). Optionally, we may "style" the image (e.g., embolden features), annotate it with braille (or text), etc.
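As a sketch of that mapping, assuming the normalized pixel maps from the import pipeline, here is one way to turn intensity into height. The base and relief parameters (in millimeters) are invented for illustration; dark pixels become tall features, light pixels stay near the base:

```ruby
# Map pixel intensity (0.0 .. 1.0) to a Z height in millimeters.
# base_mm and relief_mm are illustrative parameters, not fixed values.
def height_map(grid, base_mm: 2.0, relief_mm: 3.0)
  grid.map do |row|
    row.map { |px| base_mm + (1.0 - px[:intensity]) * relief_mm }
  end
end

grid = [[{ intensity: 0.0 }, { intensity: 1.0 }]]
height_map(grid)  # => [[5.0, 2.0]]
```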

Rendering Objects

Each computer-controlled device requires a specific set of commands that tell it what needs to be done to create the desired object. In one case, these might say to squirt plastic out of a nozzle; in another, these might say to cut away material. Fortunately, generating these commands isn't our problem.

Data Formats

Instead, we can provide a description of the desired surface (e.g., as a mesh of triangles) in the industry-standard STL format (or perhaps MakerBot's .thing archive format). Specialized software for the device will then figure out what commands are needed, feed them to the device, monitor errors, etc. (whew!)

There are multiple ways to generate STL. If the surface we need to create is pretty simple, we can code up a custom, minimal STL generator. If not, we can use a 3D modeling program such as MeshLab, OpenSCAD, or SketchUp.

Note: Not all fabrication devices accept STL. For example, a laser engraver may want an image file of some sort.

3D Modeling

There is a wealth of 3D modeling software; here are some options, ranging from custom code to full-blown modeling programs.

Custom Software

Let's assume that we have a 2D array of desired height values, corresponding to some characteristic(s) of the input image. We can visualize this as an array of tall, rectangular solids (e.g., varying lengths of 4x4 lumber, standing on end). If two adjoining solids have different heights, a vertical wall will be exposed.

We can cover all of the exposed surfaces with rectangles of various dimensions. This produces a continuous, conforming surface (i.e., a manifold). Finally, we can divide each rectangle into a pair of right triangles and emit the result in STL syntax.
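Here is a minimal sketch of such a generator. For brevity it emits only the two top triangles per cell, with zeroed normals (most tools recompute these); a real generator would also emit the exposed walls and a base, so that the mesh is watertight:

```ruby
# Append one ASCII STL facet (a triangle) to the output string.
# The zero normal is legal STL; slicers typically recompute normals.
def facet(io, a, b, c)
  io << "  facet normal 0 0 0\n    outer loop\n"
  [a, b, c].each { |x, y, z| io << "      vertex #{x} #{y} #{z}\n" }
  io << "    endloop\n  endfacet\n"
end

# Emit ASCII STL for the top faces of a height map: each cell becomes
# two right triangles at its own height.  Walls and base are omitted.
def heightmap_to_stl(heights, cell: 1.0)
  out = +"solid tile\n"
  heights.each_with_index do |row, r|
    row.each_with_index do |z, c|
      x0, y0 = c * cell, r * cell
      x1, y1 = x0 + cell, y0 + cell
      facet(out, [x0, y0, z], [x1, y0, z], [x1, y1, z])
      facet(out, [x0, y0, z], [x1, y1, z], [x0, y1, z])
    end
  end
  out << "endsolid tile\n"
end

puts heightmap_to_stl([[1.0, 2.0], [2.0, 3.0]])
```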


If implementing this algorithm becomes difficult, we might use MeshLab:

MeshLab is an advanced 3D mesh processing software system which is well known in the more technical fields of 3D development and data handling. ... It is a general-purpose system aimed at the processing of the typical not-so-small unstructured 3D models that arise in the 3D scanning pipeline. MeshLab is oriented to the management and processing of unstructured large meshes and provides a set of tools for editing, cleaning, healing, inspecting, rendering, and converting these kinds of meshes.


Alternatively, OpenSCAD is an open source utility for creating solid 3D CAD models.

... it is something like a 3D-compiler that reads in a script file that describes the object and renders the 3D model from this script file. ... This gives you (the designer) full control over the modeling process and enables you to easily change any step in the modeling process or make designs that are defined by configurable parameters.
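For instance, a height map could be turned into an OpenSCAD script by emitting one cube per cell (the 1x1 cell footprint is an illustrative assumption); OpenSCAD implicitly unions top-level objects, renders the result, and can export STL:

```ruby
# Emit an OpenSCAD script that extrudes one cube per height-map cell.
# Each cell becomes a 1x1 column whose height is the cell's Z value.
def heightmap_to_scad(heights)
  lines = heights.each_with_index.flat_map do |row, r|
    row.each_with_index.map do |z, c|
      "translate([#{c}, #{r}, 0]) cube([1, 1, #{z}]);"
    end
  end
  lines.join("\n") + "\n"
end

puts heightmap_to_scad([[2.0, 3.5]])
# translate([0, 0, 0]) cube([1, 1, 2.0]);
# translate([1, 0, 0]) cube([1, 1, 3.5]);
```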


If really complex geometry is needed (e.g., for surface texturing), it may become worthwhile to bring in high-powered assistance. Here is an approach that uses SketchUp (an interactive 3D modeling program) in a non-interactive (i.e., batch) mode. A custom Ruby plugin:

  • reads a file of SketchUp commands
  • creates a grid of rectangular solids
  • adjusts the height of each solid
  • emits the result as SketchUp and STL

image_data           # JPEG, PNG, etc.
  |>  import         # preprocessed image
  |>  model          # abstract tile model
  |>  render         # tile model, as STL
    |> pre_skp       # SketchUp control file
    |> run_skp       # SketchUp model, STL file

The pre_skp step generates a custom control file. This can be serialized in any format that a SketchUp plugin can interpret. In an earlier project, I used Ruby literals for this, but JSON can now be used via sketchup_json.
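As a sketch of pre_skp, assuming a hypothetical JSON schema (the keys and units here are invented; any structure the plugin agrees on will do), the control file might be written as:

```ruby
require 'json'

# Sketch of a pre_skp step: serialize the height map and tile metadata
# as JSON for a SketchUp plugin to read.  The schema is hypothetical.
def write_control_file(path, heights, cell_mm: 2.0)
  control = {
    'cell_mm' => cell_mm,
    'rows'    => heights.size,
    'cols'    => heights.first.size,
    'heights' => heights
  }
  File.write(path, JSON.pretty_generate(control))
end
```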

The run_skp step starts a copy of SketchUp in batch mode, then invokes a plugin to read and interpret the control file (e.g., by invoking methods in SketchUp's Ruby API). When the plugin is done, it terminates the SketchUp run.

Of course, SketchUp can generate and store its own model files (e.g., for inspection and experimentation). A SketchUp extension (e.g., SketchUp STL) can be used to generate STL files.

This wiki page is maintained by Rich Morin, an independent consultant specializing in software design, development, and documentation. Please feel free to email comments, inquiries, suggestions, etc!

Topic revision: r5 - 29 Oct 2015, RichMorin