Keiichi Matsuda

Leave a comment

Starting to reinvestigate Augmented Reality for my master's project, I came across Keiichi Matsuda, an architecture graduate who has created films that look at how we might see things around us in the future.

What is most amazing is that he started visualising this AR-enhanced world four years ago…

Augmented (hyper)Reality: Domestic Robocop

The latter half of the 20th century saw the built environment merged with media space, and architecture taking on new roles related to branding, image and consumerism. Augmented reality may recontextualise the functions of consumerism and architecture, and change the way in which we operate within them.

A film produced for my final year Masters in Architecture, part of a larger project about the social and architectural consequences of new media and augmented reality.

From the website

Cell is an interactive installation, made in collaboration with James Alliban. Commissioned by Alpha-ville for the 2011 festival, Cell plays with and proposes alternative landscapes in the technological ether surrounding our everyday movements. As our identities become deliberately constructed and broadcast commodities, our projected personae increasingly enmesh and define us. Cell acts as a virtual mirror, displaying a constructed fictional persona in place of our physical form. Composed from keyword tags mined from online profiles, these second selves stalk our movements through space, building in size and density over time. The resulting forms are alternate, technologically refracted manifestations of the body, revealing the digital aura while simultaneously allowing us escape from our own constructed identities.

Cell uses Microsoft's Xbox Kinect to track visitors as they interact with the installation. It was built in openFrameworks, an open-source toolkit originally built to teach artists and designers creative coding. Microsoft have supported the project from the early stages, working with Brighton-based company Matchbox Mobile and the openFrameworks community to build a new code library (or addon) specifically for Cell that supports the Kinect for Windows SDK. This is an important development in the field of interactive art: providing openFrameworks users with easy access to the official Kinect for Windows SDK places the technology directly into the hands of a large international community of interaction designers and new media artists.

At Last, but it wasn’t easy

Leave a comment

Today has been an up-and-down day. I had to go into work, so I lost three hours straight away.

Knowing I was going to be in town, I had emailed Jon Maxwell, my contact at the Castle Museum, in the hope he might be around so I could touch base and show him where I am.

Luckily for me he was, and I was able to show him the working static model through the Aurasma app, using my trigger image.



I think he was impressed with the 3D model appearing over the trigger, but he wanted to know where it would go from there. He is very interested in apps, and the Castle doesn't have one. I told him about my trip to London, the guys there, how they are working it, and how they are perhaps moving towards HTML rather than proprietary apps. I know how easy it would be to create an app for them with extra content on their bigger, more popular exhibits, plus a small section for their special collections; it would be simple in concept but a lot of hard work. He would need to look at the viability of implementing it, the cost and so on, but as I said to him, anything I do for the Bustard he can have, as it's my MA project and my idea to bring to fruition… yet as of now no-one at the Norwich Castle Museum is even thinking in this area…

What I realised while I was there, watching him look at it, is that the trigger image would need to be much, much bigger if it were to be a floor graphic.


I tried it in situ and also took a photo of two pieces of paper in front of the cabinet, but it was still way off the size I had imagined; on reflection, it would definitely need to be A2 or larger.
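As a rough sanity check on how big a printed floor trigger would need to be, the required pixel dimensions can be worked out from the physical paper size and print resolution. This is just my own back-of-the-envelope sketch; the 300 dpi figure is an assumption about print quality, not an Aurasma requirement.

```python
# Rough sanity check: pixel dimensions needed to print a trigger
# image cleanly at a given physical paper size and resolution.
# (Illustrative only - the DPI value is an assumption.)

MM_PER_INCH = 25.4

def print_pixels(width_mm, height_mm, dpi=300):
    """Return the (width, height) in pixels needed for a clean print."""
    to_px = lambda mm: round(mm / MM_PER_INCH * dpi)
    return to_px(width_mm), to_px(height_mm)

PAPER_MM = {"A4": (210, 297), "A3": (297, 420), "A2": (420, 594)}

for name, (w_mm, h_mm) in PAPER_MM.items():
    print(name, print_pixels(w_mm, h_mm))
```

At 300 dpi an A2 sheet (420 × 594 mm) needs roughly a 5000 × 7000 pixel image, which gives a sense of how much bigger the trigger artwork would need to be than my current print.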

Also, the static model was so dull I really needed to make that animation work.

As soon as I got back home I set about stripping out curves, joints, NURBS, animation and so on from the static model which had worked in the Aurasma platform. I thought I had better check it, so I popped in a couple of lights and started the export process…

oh no..


Again and again Maya quits on me as I try to export my Collada file. I restart my machine, I re-save my scene with a new name, but no luck… Will this ever end…

I look through my scene; surely the lights and the fact that the model is made up of three different materials can't be a problem, can it? I take them all out, and it exports…

Back to animating them the simple way; let's just get some movement in there. I bob the body and dip the head, but without the skeleton and joints it just looks so awkward… Well, let's see if we can get some animation through the Aurasma studio… It works. Not the best animation, but it works…

Back to Maya: refine the animation slightly, add in some leg movement. This animation will not win any awards; it's probably the worst animation I have ever done, but I tell you what, it goes through Aurasma again.

It’s looking a little dark, dare I add in a light? I just put in a single spotlight, it goes through…

I try another, the lighting works better now, although the colour is dull, dull, dull.

Looking at the model jigging about in its awkward animation, I think it's time to add a little more information into the Aura. It looks good, but an animated bird doesn't get across what AR can do; this is more than a 'here's an animated bird, job done' exercise, it's about conveying more information, if you need it, if you want it. So I quickly work up a couple of buttons and an information box in Illustrator.


I add these into the advanced action set in the Aurasma studio and check out the result. I like it, but I wish I could get some texture onto the model.

I go back into Maya and brave the UV mapper. I re-jig some of my images and fit them onto the shapes; it's tricky, but I get them looking good in Maya.


Upon upload into the Aurasma studio, it still looks good in the 3D preview window; I am hopeful.


But when viewed on my iPad the images are all over the place. It's so disappointing; getting the textures working would really have put the cherry on the cake…

I re-jig, refit and reformat the images, but it doesn't help the outcome. I now have 16 slightly different files created in my attempts to get this working.

I look back at the Aurasma guidelines for 3D submission:

  • Individual texture maps (.png format) must be of a dimension to the power of 2 (we recommend 128×128, 256×256, 512×512 or 1024×1024). Textures cannot be larger than 1024×1024 pixels.

OK, I go back into Photoshop, make all of my images exactly 512×512, export them as PNGs, reattach them to the model and upload. Fingers crossed…
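Since this guideline is easy to trip over, a tiny helper to validate texture dimensions before re-exporting might save a few round trips. This is my own sketch of the stated rule (power-of-two dimensions, no larger than 1024), not Aurasma code.

```python
# Check whether a texture size meets the stated Aurasma rules:
# each dimension must be a power of 2 and no larger than 1024.
# (My own validation sketch, not part of the Aurasma toolchain.)

MAX_DIM = 1024

def is_power_of_two(n):
    """True if n is a positive power of 2 (uses the bit trick n & (n-1))."""
    return n > 0 and (n & (n - 1)) == 0

def texture_ok(width, height):
    """True if both dimensions are powers of 2 and within the size limit."""
    return all(is_power_of_two(d) and d <= MAX_DIM for d in (width, height))

print(texture_ok(512, 512))    # valid
print(texture_ok(600, 600))    # not a power of 2
print(texture_ok(2048, 2048))  # too large
```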

Aurasma first 3D model test

1 Comment

After managing to create my first 3D model of the Bustard in Maya, it’s now time to test it through Aurasma.

My trigger image is made, slightly updated using the full-colour logo from the Great Bustard Group. I think it works much better as a visual cue for what you are going to see when the Augmented Reality appears. It also means there is more 'uniqueness' to the icon/trigger, so it should be easier for Aurasma to identify the correct overlay; I have had icons that were too similar confuse Aurasma, so the more complex it is the better.



First you have to get your 3D model 'Aurasma ready', which means exporting it from Maya as a Collada (.dae) file. Fortunately, although I had found an exporter online, it turns out Maya 2013 has this function built in, so I didn't even need to download it. So export your .dae.

Create your thumbnail, which must be 256 × 256 pixels…



Make sure any textures you have used as a UV map are in the same folder…


Then combine these three elements into a .tar archive (much like a zip file); I used the free 7-Zip software. Now it's ready to go into the Aurasma studio, where you load up your new trigger and overlay.
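For anyone who prefers scripting this step, the same .tar bundle can be produced with Python's standard tarfile module instead of 7-Zip. The file names below are hypothetical stand-ins for the real model, thumbnail and textures.

```python
# Bundle the three elements (model .dae, 256x256 thumbnail .png,
# texture .pngs) into a flat .tar archive using the standard library.
# (Hypothetical file names; an alternative to doing it by hand in 7-Zip.)

import os
import tarfile
import tempfile

def make_aurasma_tar(tar_path, files):
    """Add each file flat (no directory structure) into a .tar archive."""
    with tarfile.open(tar_path, "w") as tar:
        for path in files:
            tar.add(path, arcname=os.path.basename(path))

# Demonstration with stand-in files:
workdir = tempfile.mkdtemp()
assets = []
for name in ("bustard.dae", "thumbnail.png", "wing_texture.png"):
    path = os.path.join(workdir, name)
    with open(path, "wb") as f:
        f.write(b"placeholder")
    assets.append(path)

out = os.path.join(workdir, "bustard.tar")
make_aurasma_tar(out, assets)

with tarfile.open(out) as tar:
    print(sorted(tar.getnames()))
```

Using `arcname=os.path.basename(path)` keeps the archive flat, so the thumbnail and textures sit alongside the .dae rather than nested in folders.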

The trigger for Aurasma must be exactly the same image as your printed trigger; then you add an aura, which is the 3D .tar archive.
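Because the uploaded trigger must match the printed one exactly, a quick way to double-check that both came from the same file is to compare checksums. This is just my own sanity check, not part of the Aurasma workflow, and the file names are made up.

```python
# Compare two files byte-for-byte via SHA-256 digests, to confirm the
# trigger uploaded to Aurasma is the same file that went to the printer.
# (Hypothetical file names; my own check, not an Aurasma feature.)

import hashlib
import os
import tempfile

def file_digest(path):
    """SHA-256 hex digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def same_file_contents(path_a, path_b):
    return file_digest(path_a) == file_digest(path_b)

# Demonstration with stand-in files:
workdir = tempfile.mkdtemp()
uploaded = os.path.join(workdir, "trigger_uploaded.jpg")
printed = os.path.join(workdir, "trigger_printed.jpg")
edited = os.path.join(workdir, "trigger_edited.jpg")

with open(uploaded, "wb") as f:
    f.write(b"same image bytes")
with open(printed, "wb") as f:
    f.write(b"same image bytes")
with open(edited, "wb") as f:
    f.write(b"different image bytes")

print(same_file_contents(uploaded, printed))  # True
print(same_file_contents(uploaded, edited))   # False
```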



You have a few controls with which to place your 3D model; then it's a matter of pointing your device at the printed trigger image to see if it works!

I had quite a few stops and starts with this: making sure the Aurasma thumbnail was in the .tar archive, and making sure the Maya model had been converted to polygons with all history deleted, before it would even process. Then the light got converted oddly, so the model is a bit dark, but it's there…

I also UV-mapped a couple of textures, namely the wings and the head, as you can see from the Aurasma studio screenshot, but these don't show up when it's live. The problem could be the lights, the size of the mapped images, the number of different mapped images, or maybe the file type. Aurasma asks for PNGs, but I put in JPEGs just to see if they would appear, and they do: the neck is a JPEG, not a PNG, though you can see it is not correctly mapped.

At this point I want to move on and try to animate the model's legs; then I will revisit the UV mapping problems. I'm sure I can tweak it all to get it working through a bit of trial and error, but in principle it all works. As a process it's successful, I just need to be careful how I implement it.

ARt – issue 3

Leave a comment

Link to the brilliant ARt magazine

A really interesting article about augmented soundscapes, an interview with Matt Ramirez from the scARlet team, and up-to-date news on exhibitions.

Museums Contact

Leave a comment

One of my three streams for the SNU is to go down to London and see how the big museums are currently implementing AR. After a lot of internet research trying to find who has what and where, I have found four:

1) Museum of London

Location-based apps to show images from the past in the present.

Key Roman sites in London, such as the amphitheatre at Guildhall, are also brought to life through augmented reality video, produced by HISTORY™.

2) Science Museum

James May talks to you about the exhibits

3) Natural History Museum

The Attenborough Studio, where visitors can see David Attenborough talking (virtually) and showing all sorts of animals, past and present, on mobile devices supplied by the museum itself, in an amphitheatre-style presentation.

They also have an augmented reality Coelophysis, using a webcam on a computer and a trigger image for the camera to see.

4) British Museum

Use mobile phones to follow an augmented reality trail around the Museum and solve clues about the ancient Egyptian Book of the Dead

All of these things are great, but getting to all four in five hours will be impossible, so I have chosen just two (Natural History & Science Museum) to go and visit.

I have found a couple of names and departments through investigation at both museums. The Science Museum has a New Media department (Dave Patten, HoD), which will be the right connection; within that, if I want to document my visit I will need to make contact through the press office, so after a brief conversation on the phone I have found a lovely lady called Rachael Campbell who I can contact prior to my visit. Similarly with the Natural History Museum: although I have Ailisa Barry as the Head of Interactive Media, I spoke to Lucy in her department, who has given me the name of Sheila Sang, whom I will email/contact prior to my visit as she deals more with AR than Lucy does… Interestingly, the chap who was looking after their AR has just left them…

So, the next steps are to email my new-found best friends at the museums and organise the right date when I can visit and (hopefully) meet with both of them in one day.

At least my chosen Museums are quite close!




AR Magazine – #3

Leave a comment

The AR Lab is a cooperative effort between the Royal Academy of Art (Koninklijke Academie van Beeldende Kunst, KABK, also known as the University of the Arts The Hague), the University of Technology Delft and Leiden University, together with three companies, and is based in The Hague, The Netherlands. The AR Lab is part of the Raak-Pro research programme AR-VIP: Augmented Reality-Visualisation, Interaction and Perception.

I have already met Yolande Kolstee, who heads up the team, when I was in London at the Augmented Reality conference in 2012.

FACT 10th Anniversary

Leave a comment

After an international competition in 2011, FACT (Foundation for Art and Creative Technology) awarded the Manifest.AR artist group a commission to work with LJMU researchers Stephen Fairclough and Kiel Gilleade to create significant new augmented reality artworks for “Invisible ARtaffects”, part of FACT's 10th anniversary exhibition “Turning FACT Inside Out”.

I definitely hope to get up to Liverpool and investigate these first hand!

AR in Museums

Leave a comment

I have three aspects to research in my self-negotiated project, one of them being how AR is currently used in museums. This article by Shelley Mannion, Digital Learning Programmes Manager at The British Museum, entitled British Museum – Augmented Reality: Beyond the Hype, is a great short piece that references other front runners testing and looking into AR in their museums.

Among the forerunners are the Stedelijk Museum in Amsterdam which used AR to install artworks in a local park (ARTours), and the San Francisco Exploratorium which turned an evening event into a surreal AR playground (Get Surreal). In 2011, the British Museum’s digital learning team embarked on a plan to explore AR’s potential in museum education. We ran a series of experimental projects that allowed us to push the boundaries of the technology and evaluate its benefits in learning programmes. Our experience confirmed that AR – although technically still immature – has both the unique ability to engage visitors and quantifiable learning outcomes. It is a useful tool in our arsenal of interpretive tools and techniques. (quoted from the article above)

What I find interesting is that The British Museum has been testing their app, 'Passport to the Afterlife', since 2011. It is a trigger-based trail with markers which display 3D objects, and the museum itself provides the device for visitors to use, so no-one is discriminated against for not having the right phone.

I think this is great, just what I want to see in our modern world, the ancient and long gone being brought to life, real time in our own hands with the aid of technology.

We can learn at our own pace, combining tech and tradition. I still want to go and see those dug-up objects and an artist's view of what they once looked like, but imagine being able to look around an object, zoom in and out, and gather more information, relevant to your own needs, on a mobile device.

Bringing creatures back to life. Using animated 3D models to show what an extinct animal or plant would have looked like is another ideal use of AR. Holding your device over a skeleton or fossil to reveal an animated model answers an age-old interpretive challenge. The Natural History Museum in London uses this technique to populate a multimedia theatre with early humans, dinosaurs, fish and other animals in the interactive film Who do you think you really are? This is an expensive bespoke implementation with custom hardware, but these types of applications are increasingly easier and cheaper to realise. (quoted from the article above)



Leave a comment

I got an email reply back from Roger McKinley, Research and Innovation Manager at FACT (Foundation for Art and Creative Technology).

ARtSENSE is their museum augmentation and research project, and he's happy to answer some of my questions or have a Skype chat. They are holding an event in June, and in April they are publishing something that will be useful. So I just need to think up some good questions to ask…

I would like to know whether they have done any research on the way people interact with new technology pieces placed in museums: do they work? Do people simply not know what to do? And how do they go about getting the message across clearly?

What has worked best? (in ref to above q)

How have they found this out? What tests, research tools or studies did they use?

I am interested in the imparting of information and the learning aspect, i.e. does the public get it?

I am very pleased to have made this contact, and hope to go and see FACT up in Liverpool as there is nothing like it here.


Leave a comment

Looking to find AR already in practice, I stumbled across an international group project called ARtSENSE, which is looking into museum spaces and interactive Augmented Reality in exhibitions.

“Aimed at improving and augmenting the gallery and museum visitor experience through wearable technology, ARtSENSE is a European research project, in collaboration with two other cultural organisations and five technical and research organisations, including FACT and Liverpool John Moores University… For the museum visitor the result is an enhanced, personalised experience, taking them on an innovative journey through the hidden stories of the artworks and artifacts.”

They have founder members of MANIFEST.AR involved, and this is a brilliant SlideShare presentation talking about the future of augmented reality within museums.

This chap is creating AR art already…

I also contacted Roger McKinley who is the Research and Innovation Manager at FACT just to see if I can garner any help or involvement with it…

