MindsEye Artist’s Kit

Art Examples

  • This high-resolution image of the Earth has been processed using the Deep Dream algorithm, which enhances latent patterns found in the image.
  • Here is an animation of several different style transfer results, each inspired by the style of a different painting, each rendering a content image of the Taj Mahal.
  • Abstract textures, with optional support for tiling, can also be generated.
  • These techniques can also be hybridized to create computer-generated abstract art.

Background

Requirements

  1. An active AWS account, with root user and/or full administrative permissions. The account itself is free to sign up for, though the EC2 machine time it launches is a paid resource.
  2. AWS Command Line Tools — Used to configure the local AWS credentials.
  3. A Git client — Used to retrieve and manage the code for this project.
  4. Java 8+ JDK — Code is built locally and deployed automatically, so you need the tools to build Java.
  5. IntelliJ (or compatible tool such as Eclipse or Maven) — Needed to open, build, and run the project. Further instructions assume the reader is using IntelliJ.

Environment Setup

  AWS:
  1. Sign up for an AWS Account
  2. Install the AWS Command Line Tools
  3. Configure your local AWS credentials (for example, by running "aws configure")
  GitHub:
  1. Make a free GitHub account — This is needed to fork the mindseye-art project, if you want to post your own code modifications online.
  2. Install a Git client — This is needed to download the mindseye-art project.
    Alternatively, GitHub provides a nice graphical user interface client.
  Build tools:
  1. Java 8 JDK — The Java Development Kit is used to build the Java source code.
  2. IntelliJ — The Community Edition is free and works well; MindsEye was developed using it.

Project Setup

  1. Fork the project — This lets you publish your own work; it is optional and can be done later, but it is easiest as step #1.
  2. Clone the project — Use Git to download the project to your machine
  3. Load the project in IntelliJ — This will download all other dependencies

Execution

  1. Select a script — for example, com.simiacryptus.mindseye.deep_dream.Simple
  2. Edit the script if desired, for example by adjusting these common parameters (see the sketch after this list):
    Verbosity — If set to true, the report will include details of the training process used to generate the images.
    Max Iterations — Sets the maximum number of iterations used in each training phase.
    Training Minutes — Sets the maximum number of minutes used in each training phase.
    Input Image — Each of these scripts uses one or more input images.
    Output Resolution — The resolution of the output image. Most scripts risk running out of memory at resolutions around 1000px; use the HiDef script variants to work past this limit.
  3. Run EC2 entry point — In IntelliJ, you can right click the EC2 inner class and select “Run Simple$EC2.main()”
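Since the scripts are plain Java, editing these values simply means editing fields in the class before running it. The following is a minimal, self-contained sketch; the field names only mirror the parameters listed above and are hypothetical, not the actual MindsEye script members.

    // Hypothetical parameter holder; names mirror the common script settings
    // described above, not the real MindsEye fields.
    public class ScriptSettingsSketch {
        boolean verbose = true;           // include training details in the published report
        int maxIterations = 50;           // cap on optimizer iterations per training phase
        int trainingMinutes = 30;         // wall-clock cap per training phase
        String inputImage = "file:///path/to/content.jpg"; // content and/or style input
        int outputResolution = 800;       // ~1000px is where non-HiDef scripts risk OOM

        public static void main(String[] args) {
            ScriptSettingsSketch s = new ScriptSettingsSketch();
            System.out.println("verbose=" + s.verbose
                + ", maxIterations=" + s.maxIterations
                + ", trainingMinutes=" + s.trainingMinutes
                + ", inputImage=" + s.inputImage
                + ", outputResolution=" + s.outputResolution + "px");
        }
    }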

First Time Setup

  1. S3 Bucket — This stores published results and code deployed to EC2
  2. IAM role — A non-administrative role is configured for the EC2 node
  3. EC2 Security Group — Configures networking security
  4. SSH Keys — Used to control the EC2 node once launched (see the sketch below)
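These resources are provisioned automatically the first time an EC2 script runs. Purely for illustration, the sketch below shows roughly equivalent AWS SDK for Java (v1) calls; the bucket, group, and key names are hypothetical, and the non-administrative IAM role with its policy document is omitted for brevity.

    import com.amazonaws.services.ec2.AmazonEC2;
    import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
    import com.amazonaws.services.ec2.model.CreateKeyPairRequest;
    import com.amazonaws.services.ec2.model.CreateSecurityGroupRequest;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    // Illustration only: the kit performs this setup itself on first run.
    public class FirstTimeSetupSketch {
        public static void main(String[] args) {
            // S3 bucket for published reports and deployed code (name is hypothetical)
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
            s3.createBucket("my-mindseye-results");

            AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();
            // Security group controlling network access to the worker node
            ec2.createSecurityGroup(
                new CreateSecurityGroupRequest("mindseye-worker", "MindsEye EC2 access"));
            // SSH key pair used to control the node once launched
            String privateKey = ec2.createKeyPair(new CreateKeyPairRequest("mindseye-key"))
                .getKeyPair().getKeyMaterial();
            System.out.println("Key material length: " + privateKey.length());
            // The non-administrative IAM role attached to the node is created similarly
            // via the IAM API (policy document omitted here).
        }
    }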

Monitor Execution

  1. Browser Windows Open — The process will open two browser windows. The first displays the logged progress of the local process, which dispatches the remote task. The second opens after the remote process has started and displays the output of the main script running remotely.
  2. A “Start” Email is Sent — The user is notified by email with links to monitor output progress and to manage the node in the AWS Console.
  3. A “Finished” Email is Sent — This email includes the full output of the script, with appended links to the HTML, PDF, and ZIP formatted results.

Script Families

  Deep Dream:
  1. Content Image — A single input image is given, which is used directly as the starting canvas and gradually altered.
  2. Per-layer “mean” coefficient — These coefficients anchor the result to the original content by penalizing L2 deviations from the ground-truth signal at a given layer.
  3. Per-layer “gain” coefficient — These coefficients determine the strength of amplification for each layer’s signal (sketched below).
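As a rough illustration of how the two coefficients interact, the sketch below computes a single layer's objective under assumed conventions; it is not the MindsEye implementation. The mean term penalizes drift from the original activations, while the gain term rewards amplifying the layer's overall response.

    // Conceptual sketch only: method names and weighting are illustrative,
    // not the actual MindsEye API.
    public class DeepDreamLossSketch {
        /**
         * @param current  activations of the evolving image at one layer
         * @param original activations of the untouched input image at the same layer
         * @param mean     weight of the L2 anchor to the original signal
         * @param gain     weight of the amplification (signal energy) term
         */
        static double layerLoss(double[] current, double[] original, double mean, double gain) {
            double anchor = 0, energy = 0;
            for (int i = 0; i < current.length; i++) {
                double d = current[i] - original[i];
                anchor += d * d;                    // penalize drift from the ground-truth signal
                energy += current[i] * current[i];  // reward amplifying the layer's response
            }
            int n = current.length;
            return mean * (anchor / n) - gain * (energy / n);
        }

        public static void main(String[] args) {
            double[] original = {0.2, 0.5, 0.1};
            double[] current = {0.4, 0.9, 0.3};
            System.out.println(layerLoss(current, original, 1.0, 0.01));
        }
    }

Minimizing this value over the image pixels drives the amplification; the scripts expose both weights per layer so the balance can be tuned.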
  Style Transfer:
  1. Content Image — The primary input image determines the content. It is first passed through a degradation process to initialize the output image, which is then evolved using the feature signals of the undegraded input.
  2. Style Image — One or more style images are also input; these are pre-processed to gather aggregate metrics which describe the overall patterns and textures they contain.
  3. Per-layer “mean” coefficient — These coefficients determine how tightly to match the mean values of each feature channel on the given layer with the target style.
  4. Per-layer “cov” coefficient — These coefficients determine how tightly to match the Gram matrix of the feature channels on the given layer with the target style (both statistics are sketched below).
  5. Per-layer “gain” coefficient — These coefficients add the components used in Deep Dream, causing signal amplification at each layer where they are set.
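To make the mean and cov terms concrete, here is a minimal, self-contained sketch (not the MindsEye code) of the statistics they compare: per-channel means and the Gram matrix of a layer's activations. Both are computed for the working image and for the style image, and the per-layer coefficients weight how strongly the squared differences between them are penalized.

    // Illustrative sketch of the per-layer style statistics; layout assumption:
    // features[c][p] holds the activation of channel c at spatial position p.
    public class StyleStatsSketch {
        /** Mean activation of each feature channel. */
        static double[] channelMeans(double[][] features) {
            double[] means = new double[features.length];
            for (int c = 0; c < features.length; c++) {
                for (double v : features[c]) means[c] += v;
                means[c] /= features[c].length;
            }
            return means;
        }

        /** Gram matrix: inner products between channel activations, averaged over positions. */
        static double[][] gramMatrix(double[][] features) {
            int channels = features.length, positions = features[0].length;
            double[][] gram = new double[channels][channels];
            for (int i = 0; i < channels; i++) {
                for (int j = 0; j < channels; j++) {
                    double sum = 0;
                    for (int p = 0; p < positions; p++) sum += features[i][p] * features[j][p];
                    gram[i][j] = sum / positions;
                }
            }
            return gram;
        }

        public static void main(String[] args) {
            double[][] features = {{1, 2, 3}, {0, 1, 0}}; // 2 channels, 3 positions
            System.out.println(java.util.Arrays.toString(channelMeans(features)));
            System.out.println(java.util.Arrays.deepToString(gramMatrix(features)));
        }
    }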
  Texture Generation:
  1. Style Image — One or more style images are input; these are pre-processed to gather aggregate metrics which describe the overall patterns and textures they contain. No content image is used.
  2. Per-layer “mean” coefficient — These coefficients determine how tightly to match the mean values of each feature channel on the given layer with the target style.
  3. Per-layer “cov” coefficient — These coefficients determine how tightly to match the Gram matrix of the feature channels on the given layer with the target style.
  4. Per-layer “gain” coefficient — These coefficients add the components used in Deep Dream, causing signal amplification at each layer where they are set.

Script Sub-Types

  1. Simple — An example is provided that is as simple as possible, using a single phase and a single set of inputs.
  2. Enlarging — The basic process is repeated over several iterations, between which the working image is gradually enlarged. This can provide new behavior, for example by combining multiple scales and resolutions.
  3. ParameterSweep — This repeats the basic process over a range of input parameters, displaying the resulting progression as a formatted table and as an animation.
  4. StyleSurvey — Only applicable to Style Transfer and Texture Generation, this script iterates over a collection of style images to display a variety of output images, formatted as a table and as an animation.
  5. HiDef — These scripts include special logic for processing high-resolution images, generally by breaking the calculation into image tiles that are processed separately (a minimal tiling sketch follows this list).
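The following is a minimal sketch of the tiling idea using plain java.awt; the actual HiDef scripts choose tile size, overlap, and blending according to available memory, so treat this only as an illustration of the decomposition.

    import java.awt.image.BufferedImage;
    import java.util.ArrayList;
    import java.util.List;

    // Splits a large canvas into tiles small enough to process within GPU memory.
    // The tile size here is arbitrary; seam handling is omitted.
    public class TilingSketch {
        static List<BufferedImage> tiles(BufferedImage image, int tileSize) {
            List<BufferedImage> result = new ArrayList<>();
            for (int y = 0; y < image.getHeight(); y += tileSize) {
                for (int x = 0; x < image.getWidth(); x += tileSize) {
                    int w = Math.min(tileSize, image.getWidth() - x);
                    int h = Math.min(tileSize, image.getHeight() - y);
                    result.add(image.getSubimage(x, y, w, h));
                }
            }
            return result;
        }

        public static void main(String[] args) {
            BufferedImage canvas = new BufferedImage(4000, 3000, BufferedImage.TYPE_INT_RGB);
            System.out.println(tiles(canvas, 1024).size() + " tiles"); // 4 x 3 = 12 tiles
        }
    }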

Script Output Examples

  Deep Dream:
  1. Simple: ZIP, PDF, HTML
  2. High Resolution: ZIP, PDF, HTML
  Style Transfer:
  1. Simple: ZIP, PDF, HTML
  2. Enlarging: ZIP, PDF, HTML
  3. Style Survey: ZIP, PDF, HTML
  4. Parameter Sweep: ZIP, PDF, HTML
  5. High Resolution: ZIP, PDF, HTML
  Texture Generation:
  1. Simple: ZIP, PDF, HTML
  2. Enlarging: ZIP, PDF, HTML
  3. Style Survey: ZIP, PDF, HTML
  4. Parameter Sweep: ZIP, PDF, HTML
  5. High Resolution: ZIP, PDF, HTML

Further Reading

  1. Simia Cryptus MindsEye — The parent project, focusing on Java 8 Neural Networks
  2. Original Style Transfer Paper
  3. Original Google Deep Dream Blog Post
  4. CuDNN — Powers most of the heavy compute
  5. Aparapi — Another supported tool for GPU-accelerated layers
  6. Google Arts & Culture — Excellent resource for inspiration
