Hackday 2017 – think like a startup!

What we did

We’ve done various hackdays in the past, and have really valued experimenting with new technology and getting to know our colleagues better.

However, we’d also found that our previous hackdays had been a bit tech-focused.  Whilst we’d tried to seize the opportunity for developers to teach others about what they were doing, and to draw on a range of non-technical skills in our creations, non-developers had sometimes felt a bit left out.

A second problem we’d had in the past is that hackday teams had spent a lot of time trying to work out what to create, and so had less time for actually creating it.

To try and address both of these problems, we decided to try something a bit different with our most recent hackday.

First, we produced an automated Idea Generator “fruit machine”, giving teams business ideas of the form <product> for <user-group> using <technology>.  For example, “Facebook for dogs using Amazon Echo”.  We produced a number of these ideas at the start of the day, and teams could pick one.
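The generator’s actual word lists weren’t published, but the core of such a “fruit machine” is just a random pick from each of three lists. A minimal sketch in Python, seeded with the combinations mentioned in this post, might look like this:

```python
import random

# Illustrative word lists only -- the real generator's lists were longer.
PRODUCTS = ["Facebook", "Tinder", "Wikipedia", "Flickr"]
USER_GROUPS = ["dogs", "musicians", "children", "autistic people"]
TECHNOLOGIES = ["Amazon Echo", "Raspberry Pi", "Augmented Reality",
                "Natural Language Processing"]

def generate_idea(rng=random):
    """Return one idea of the form '<product> for <user-group> using <technology>'."""
    return "{} for {} using {}".format(
        rng.choice(PRODUCTS),
        rng.choice(USER_GROUPS),
        rng.choice(TECHNOLOGIES),
    )
```

Calling `generate_idea()` a handful of times at the start of the day would give teams a menu of ideas to pick from.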

Second, rather than focussing on building a product, we made it about pitching an idea.  To that end, the intended outputs were…

  • A tech demonstration (which could be a simple “hello world” example of how to use some new tech)
  • A design concept (e.g. wireframe), giving non-designers the chance to learn more about the design process
  • A pitch presentation, including…
      • An Elevator Pitch (using the template below)
      • A Business Model Canvas (using the template below)

Oh, and we finished the day with some classic party games to wind down!

Our Elevator Pitch template looked like this:

Elevator Pitch template

And our Business Model Canvas template like this:

Business Model Canvas template

Read on to find out how the different teams got on.

What the teams say


Duetr

The idea generator gave us: Tinder for Musicians using Raspberry Pi.

How we got on: We thought the idea was hilarious in the first place, with lots of potential for technical/design experimentation … but probably not much potential to be the next Startup Unicorn. We enjoyed developing the product from the ground up and receiving feedback from the strategy meeting. We split the work into different tasks, but still worked closely together, sharing ideas and laughs. Overall it was a great opportunity to try out different technologies and to experience the early life of a hilarious product as if it were really heading to production.

What we produced: Our initial idea was helping musicians turn their solo performance into the perfect harmonious duet.

Having white-boarded the business proposition, we realised that our target audience needed to change. Unfortunately, we were not going to be able to reach the shy audience stuck in their lonesome bedrooms. Our new target: mass appeal in large superclubs, with a concession stand where single clubgoers could hire a flashing “Love Bear” teddy for the evening to help them find a potential match. After roughly 2 hours of on-and-off work, we had two Raspberry Pis set up and running, a front-end web interface for configuring them, a presentation, and plenty of laughs collected all through the development of this insane app.

Any other thoughts? Always label your hardware when you have lots of small identical devices. Grabbing the wrong one in a hurry will not help your live demo.

Screenshot from Duetr's presentation


Sunglass

Team Sunglass went off-piste with their hack.  After brainstorming on their initial idea for a while, they decided instead to build an app that Mat had been wanting to try for ages:

Sunglass is an app which allows you to find local pubs whose beer gardens are currently in the sun.  Perhaps unsurprisingly for a team which included our CEO, they produced an impressive Elevator Pitch and Business Model Canvas (see picture), as well as a fun Thelma-and-Louise-themed presentation.  Having wowed us with a demo of their sunshine-position calculations, they confessed that it was all just a mockup.  However, they’d obviously put a lot of thought into monetising the app and its data streams, so if they ever produce the real thing, they could be onto a winner!

Business Model Canvas from team Sunglass


WhatsUp 2000

The idea generator gave us: Flickr for autistic people using Natural Language Processing.

How we got on: We found it challenging to get to a specific product idea, but enjoyed working together as a team to devise the concept.  The delay in settling on the idea meant we had less time for design than we’d have liked, but we still managed to pull together a nice technical proof-of-concept and a light-hearted pitch slide deck.

What we produced: People with autism can have trouble “reading” emotions, so we built an application which took a sentence and used the MoodPatrol API to perform sentiment analysis on it.  This gave back the key emotions associated with the text (e.g. “angry”, “happy”, “proud”) which we displayed to the user along with a flashcard-style image of someone showing that emotion.  And we put together a slide deck for our pitch with the tongue-in-cheek product title of “WhatsUp 2000” (see the picture for a screen mockup).

Screen mockup of WhatsUp 2000
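We don’t have the MoodPatrol request format to hand, so the sketch below covers only the step after the API call: mapping the emotions it returns to flashcard images. The filenames and emotion keys are hypothetical; only the overall flow (emotions in, matching flashcards out) follows the description above.

```python
# Hypothetical mapping from emotion keywords to flashcard image files.
FLASHCARDS = {
    "angry": "flashcards/angry.png",
    "happy": "flashcards/happy.png",
    "proud": "flashcards/proud.png",
}

def flashcards_for(emotions):
    """Pair each detected emotion with its flashcard image,
    skipping any emotion we have no card for."""
    return [(e, FLASHCARDS[e]) for e in emotions if e in FLASHCARDS]
```

So if the sentiment analysis returned “happy” and “proud”, the app would display those two words alongside their flashcard images.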

Any other thoughts? We enjoyed the focus on devising a product and pitching it, not just on creating something technical.

Hello Rhino

Screen mockup of "Hello Rhino" app

The idea generator gave us: “Wikipedia for children using Augmented Reality”. We gravitated to this (I think) because we liked the sound of using augmented reality, recognised Wikipedia as something many of us used daily, and it felt like a really rounded subject with loads of potential.

How we got on: We developed our product idea really quickly as we fortunately had Becky who would be a target user for our planned product. Our elevator pitch statement: “For parents (Becky!) of young children (3-8) who have the problem of pacifying and entertaining/distracting a mithering child… Hello Rhino will entertain and educate your child, unlike many ‘junk’ kid apps and games.”  We also produced some design sketches of our ideas for the product.

What we produced: We identified the basic child-friendly user flow for the phone app:

– The child sees an interesting thing that they want to find out more about. The example we used was a Rhinoceros at the Cotswold Wildlife Park.

– The parent or child would then open the app on a mobile phone.

– They would be able to take a picture of that animal.

– The app would then identify the animal, and read out some interesting text from the Simple English version of Wikipedia.

We produced a very lightweight proof-of-concept to test a couple of technical things. Many mobile apps are able to take photographs of things, so we didn’t feel the need to prove that that was possible. However, there were three parts of the above flow that we wanted to validate:

  1. Identifying an animal from an image using an existing online service.
  2. Retrieving a child-friendly snippet of text about that animal from an existing online service.
  3. Reading the text out using text-to-speech.

We wrote a short script which chained these three things together, turning a photo into a readout about the main subject of that photo.

The steps were implemented using the following services:

  1. We submitted the target photo to the Google Vision API, which returned a list of keywords for that photo, each with a confidence score. For example, it might say 85% Rhinoceros, 30% Sand, 29% Outdoors. We simply extracted the highest-rated item from the list, which was (almost) always the subject of the photo.
  2. We then posted this keyword to Simple English Wikipedia. This is a simplified version of Wikipedia which, although not targeted at children, does at least lower the barrier to entry by avoiding overly complicated words which a child might not understand. We took the first paragraph returned by the API (here’s an example) and passed that on to the next service.
  3. Finally, we passed the Wikipedia text to our Mac’s say command, which reads out the text in a slightly robotic voice. We had a lot of fun trying to choose a voice that wasn’t too strange and could be understood easily by a child.
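Stripped of the HTTP calls, the chaining script’s three steps might be sketched like this. The (keyword, score) pair format is an assumption for illustration (the real Vision API returns JSON that would need parsing first), and the voice name is just an example:

```python
import subprocess

def top_label(labels):
    """Step 1, after the Vision API call: pick the highest-scoring keyword.
    Assumes the response has already been parsed into (keyword, score) pairs,
    e.g. [("Rhinoceros", 0.85), ("Sand", 0.30), ("Outdoors", 0.29)]."""
    return max(labels, key=lambda pair: pair[1])[0]

def first_paragraph(extract):
    """Step 2, after fetching the Simple English Wikipedia extract:
    keep only the first paragraph, as we did."""
    return extract.split("\n\n")[0].strip()

def read_aloud(text, voice="Daniel"):
    """Step 3: hand the text to the Mac's say command (macOS only;
    `say -v ?` lists the installed voices)."""
    subprocess.run(["say", "-v", voice, text], check=True)
```

Chained together: `read_aloud(first_paragraph(fetch_extract(top_label(labels))))`, where `fetch_extract` stands in for the Wikipedia HTTP call.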

Any other thoughts? Although we felt like we could easily create a start-up and develop a real product, when dry-running our plans for revenue, we quickly recognised there’s no money in apps… booooo!

Hackday conclusions

Many people were glad of the help in choosing ideas, but to some, this felt too rigid.  So we would use the idea generator again, but make it clearer that teams were free to invent their own ideas too.  And we’d probably adjust the variables of the idea-generation fruit machine – some of the combinations were just too odd!

People seemed to enjoy learning more about pitching an idea, so we would recommend including those elements (e.g. Business Model Canvas) in the hackday, but perhaps we expected teams to produce too much.  This could be mitigated by structuring the day differently so that specific time was set aside for the business-focussed deliverables.

Most of all, though, we enjoyed the chance to have fun, to get to know each other better, and to learn by doing something different from our day jobs, so roll on the next hackday!
