Hackathon Team Kankou

mithack2017

#1

Team Kankou is an online hackathon team started by Unofficial Community Guide members @jcasman and @codetricity. This team cannot win any prizes. However, if you want to join us online for fun and potential glory, please add your ideas and code to the discussion.

Ideas can be discussed here at any time, and the project will be decided between 4pm and 5pm PT on October 7th. After that, we’ll build the project. Although we’ll use THETA V cameras, you should be able to build parts of the project with either the THETA S or the SC.

Currently, we have these ideas:

  • use the USB API for a project that shows virtual sales with 360 images in A-Frame. The USB API will be controlled from a Raspberry Pi. We’ll start off hosting the images on GitHub Pages
  • use live streaming in Unity to provide virtual training
  • use A-Frame to create a virtual scavenger hunt to find hidden 3D objects

Previous Online Submissions by Team Kankou

DriverEye

We need:

  • what are we selling or showing? Cars and real estate are kind of old at this point
  • pictures of the thing we’re trying to sell or show
  • navigation ideas for how to move between the 360 spheres. A top-down map like the one shown in the previous idea is kind of old

Team Kankou - Last Year’s Raspberry Pi with USB Cable

Using the

  • Raspberry Pi 2
  • Raspberry Pi Foundation touchscreen monitor
  • portable USB battery
  • THETA S
  • USB API accessed from Python
    • (using a crude hack to run ptpcam from Python. Unfortunately not using the Python libptp API. But, hey, the hack worked.)
  • Python GUI with Pygame

Team Kankou Possible Live Streaming Idea

Use this template and create a virtual training room.

We need:

  • 3D assets to put into the sphere. Find them online
  • audio is not spatial, so what are the cues to move a person to another area?

Other ideas for user interface

The ways to interact with 360 images inside a headset are wide open. Should users enter the sphere from a sphere, a square, a cylinder?

Do you overlay spheres inside of the main image sphere?

Do we create a menu at the bottom, possibly breaking the VR scheme?

Equipment

We’ll need to target Android and iOS phones, as we don’t have HTC Vive or Rift headsets at home. If we build a Unity app, we’ll need to try to build it for Android, which we have no experience with.


#2

One idea: We could build an extended A-Frame headset gallery that lets the headset wearer move around within the Presidio in San Francisco. It would not be a large area, but they could go North-South-East-West for maybe two jumps in each direction. I could take the 360 images so that the content is interesting.

Like this:


#3

You’d need to set consistent spacing and direction, probably with a compass and by measuring the distance. How far apart should the spheres be, maybe 30 feet from the center in each of North, South, East, and West?

This could be like Google Streetview, but with some other AR stuff thrown into the sphere. Maybe like a Pokemon Go game?


#4

Oh, oh, oh! We could add a small item, like a Pokemon Go character, something cool, that the user has to look around for. They would have to look around within the spheres, jumping from one to the next, in order to search for and find the object. A scavenger hunt!


#5

So, the prize they find would be in the third sphere out, for example a panda?

My daughter built some panda icons in A-Frame, so no shortage of those… Can you find the panda in the screenshot below?


#6

Yes, that’s the idea. Your daughter is probably way better at coming up with some cool panda character for us. :slight_smile:


#7

Idea

Scavenger hunt where kids or adults look for things inside a VR headset or Cardboard viewer. 3D assets are placed inside the spheres as prizes and clues. Winning the game is based on finding the final item. Could mix in historical or geographical learning about the physical space. The physical space is built from real 360 images.

Tasks

  • @jcasman takes the pictures and shares them with team on Google Drive folder
  • figure out first sphere North/South/East/West placement of navigation
    • we currently do not know if we can space the spheres out at 90 degree spacing
  • navigation occurs with “stare” at menu inside of headset
  • no plans to use camera orientation data from image metadata right now. might do so in the future, but not for the first phase
  • test text placement inside of sphere, for clues and timer
  • test timer with numbers shown inside of sphere
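For the in-sphere timer test, here is a minimal sketch (the component name and text attribute are our assumptions, not settled project code). The formatting is kept as a plain function so it can be tested outside the browser, and a small A-Frame component writes the remaining time into the entity's text attribute each tick:

```javascript
// Format a millisecond countdown as "M:SS" for display inside the sphere.
function formatCountdown(msRemaining) {
  var totalSeconds = Math.max(0, Math.ceil(msRemaining / 1000));
  var minutes = Math.floor(totalSeconds / 60);
  var seconds = totalSeconds % 60;
  return minutes + ':' + (seconds < 10 ? '0' : '') + seconds;
}

// Hypothetical A-Frame component: counts down from `duration` milliseconds
// and updates the entity's text value on every render tick. Guarded so the
// formatting function above can also run outside a browser.
if (typeof AFRAME !== 'undefined') {
  AFRAME.registerComponent('countdown', {
    schema: { duration: { type: 'number', default: 60000 } },
    init: function () { this.start = null; },
    tick: function (time) {
      if (this.start === null) { this.start = time; }
      var remaining = this.data.duration - (time - this.start);
      this.el.setAttribute('text', 'value', formatCountdown(remaining));
    }
  });
}
```

The entity would then be something like `<a-entity text="value: 1:00" countdown="duration: 60000">` placed inside the sphere along with the clue text.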

notes

  • remember you cannot open an A-Frame file locally in your browser. You need to run a local web server or use something remote like Glitch
  • we may still scrap this idea and go back to a live streaming idea as we collect more info.
  • we can run A-Frame as a hybrid mobile app. I have experience building these apps with Crosswalk, not Cordova. It’s also possible with Cordova; I just can’t get it running properly on my phone.

@jcasman maybe Kieran Farr wants to join Team Kankou for a few hours? I think he’s pretty experienced with A-Frame.


#8

From a quick glance at A-Frame, the position of the menu image does not appear to include any directional data. Position is set manually, as in the code here:

<a-entity id="links" layout="type: line; margin: 1.5" position="0 -1 -4">

This still might work just fine, if we know North in the images.


#9

Assuming “0 -1 -4” corresponds to x, y, z, can you place each image at 90-degree spacing?

I can rotate the entire sphere. See this:

Virtual Reality: Transformation Tool for Learning and Equity in Middle Schools

if (data.src === "#theta2") {
  console.log("#theta2 has come up: ", data.src);  
  // data.target.setAttribute('rotation', "20, 190, 0");
  data.target.setAttribute('rotation', "-20, -10, 0");
}

Note that the attribute is rotation and NOT position for the entire sphere. If we’re trying to put one menu 180 degrees from another menu, how do we do it?

Is it possible?

If we can’t place the menus 180 degrees apart, we need to switch to another idea.

If we just move the x coordinate of the menu, we’ll get something like this, where the image goes off on one axis only.

This seems like it would be a common problem with a solution out there. Google search?
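One common approach, sketched here under our own assumptions rather than taken from an A-Frame recipe: since `position` is plain x/y/z with no compass awareness, a menu can be placed on a circle around the camera by converting a bearing angle to coordinates. A-Frame's camera looks down -z, so a bearing of 0 degrees lands straight ahead and 180 degrees lands directly behind:

```javascript
// Convert a bearing in degrees (0 = straight ahead, increasing clockwise
// when viewed from above) and a radius into an A-Frame "x y z" position
// string. In A-Frame the default camera looks down -z, so bearing 0 maps
// to (0, y, -radius).
function bearingToPosition(bearingDegrees, radius, y) {
  var rad = bearingDegrees * Math.PI / 180;
  var x = radius * Math.sin(rad);
  var z = -radius * Math.cos(rad);
  // Round to avoid long floating-point tails in the attribute string.
  return [x, y, z].map(function (n) {
    return Math.round(n * 1000) / 1000;
  }).join(' ');
}
```

With this, four menus at 90-degree spacing would be `bearingToPosition(0, 4, -1)`, `bearingToPosition(90, 4, -1)`, and so on, and two menus 180 degrees apart come out with opposite signs on z, so the position attribute alone can do it without rotating anything.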

This is not a solution, but look what Kieran built

First attempt is not looking promising. I can’t view the plane if it passes 0 on the z axis. I’m going to try using spheres.

Not working

<!-- Image links. -->
<a-entity id="links" layout="type: line; margin: 1.5" position="0 -1 -8">
  <a-entity template="src: #link" data-src="#theta2" data-thumb="#theta2-thumb"></a-entity>
  <a-entity template="src: #link" data-src="#theta1" data-thumb="#theta1-thumb"></a-entity>
</a-entity>
<a-entity position="0 -1 -0.5">
  <a-entity template="src: #link" data-src="#theta3" data-thumb="#theta3-thumb"></a-entity>
</a-entity>

The placement of the spheres works, but it may take time to get the fuse links working.

Update

Regarding A-Frame

I think I can get it to work, but it seems fairly unpolished. I’m thinking of going back to Unity.

Unity Options

Remember the Bugatti example using the FPS built-in libraries with Unity and WebGL? We used the THETA images from the contest. The example below uses an image from a temple.

https://codetricity.github.io/bugattivr/

Unity might be a better option for still images as well as live streaming if we’re dealing with movement inside of the sphere.

Unless we find an A-Frame library that handles the First-Person-Shooter (FPS) movement Unity has built in, we’ll need to build our own movement controls for walking inside the sphere, which is going to suck up time. There’s a lot already built into Unity that will make development easier.
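One thing worth checking before writing custom controls: A-Frame ships a stock `wasd-controls` component that already gives basic keyboard movement on the camera, though not the full FPS feel (jumping, collision) of Unity's controller:

```html
<!-- A-Frame camera with built-in mouse/headset look and WASD keyboard movement. -->
<a-entity camera look-controls wasd-controls></a-entity>
```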

Also, maybe we should focus on live streaming, since we’ve never built a usable live streaming demo yet?

First step could be to find assets to put into the live stream. I think the hockey rink is too complex.

Any other ideas?


#10

We’re working on an idea for a VR-enabled real estate app. You can look through a whole house while sitting at your desk, using a headset running from a browser on your smartphone.

On the real estate agent side, there’s a mobile app, built with JavaScript and Cordova, whose interface uses the RICOH THETA API to take and store pictures. The agent just presses one button in each room and enters a small amount of information about where the picture was taken. (Probably a pulldown menu of rooms in a house.)
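As a sketch of what the agent app's one-button capture could look like, assuming the standard Open Spherical Camera (OSC) endpoint the THETA exposes over Wi-Fi; the helper function names are ours, not from the actual app:

```javascript
// Build the JSON body for the OSC camera.takePicture command.
function takePictureCommand() {
  return JSON.stringify({ name: 'camera.takePicture' });
}

// Send the command to the camera. In Wi-Fi (access point) mode the THETA
// answers at 192.168.1.1; this assumes a fetch-capable environment such
// as a Cordova WebView.
function takePicture() {
  return fetch('http://192.168.1.1/osc/commands/execute', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: takePictureCommand()
  }).then(function (res) { return res.json(); });
}
```

The room tag entered by the agent would be stored alongside the response so the image can be renamed and uploaded afterward.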

On the house buyer side, we settled on Glitch + A-Frame, as a great framework for getting VR content into a headset quickly. We built a small world, starting outside a door, moving into a hallway, and finally into an office. There are clear floating boxes, and a small circle where your headset is aimed.

Examples:

Here’s the A-Frame code we’re running in Glitch. This can be viewed at kankou.glitch.me - Comments welcome!

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>The Door by Team Kankou for MIT Hackathon</title>
    <meta name="description" content="Team Kankou for MIT Hackathon">
    <script src="https://aframe.io/releases/0.5.0/aframe.min.js"></script>
    <script src="https://npmcdn.com/aframe-animation-component@3.0.1"></script>
    <script src="https://npmcdn.com/aframe-event-set-component@3.0.1"></script>
    <script src="https://npmcdn.com/aframe-layout-component@3.0.1"></script>
    <script src="https://npmcdn.com/aframe-template-component@3.1.1"></script>
    <script src="components/set-image.js"></script>

  </head>
  <body>
    <a-scene>
      <a-assets>
        <!-- Sky -->
        <img id="frontdoor" crossorigin="anonymous" src="https://cdn.glitch.com/0a206311-a2e9-43c1-bb60-922daf36091f%2Ffront-door.jpg?1507420175349">
        <img id="frontdoor-thumb" crossorigin="anonymous" src="https://cdn.glitch.com/0a206311-a2e9-43c1-bb60-922daf36091f%2Ffrontdoor-thumb.png?1507422242431">
        <img id="hallway-thumb" crossorigin="anonymous" src="https://cdn.glitch.com/0a206311-a2e9-43c1-bb60-922daf36091f%2Fhallway-thumb.png?1507422716014">
        <img id="office-thumb" crossorigin="anonymous" src="https://cdn.glitch.com/0a206311-a2e9-43c1-bb60-922daf36091f%2Foffice-thumb.png?1507422838711">
        <audio id="click-sound" crossorigin="anonymous" src="https://cdn.glitch.com/0a206311-a2e9-43c1-bb60-922daf36091f%2Fopen_creaky_door.ogg?1507423206045"></audio>
        <img id="hallway" crossorigin="anonymous" src="https://cdn.glitch.com/0a206311-a2e9-43c1-bb60-922daf36091f%2Fhallway.jpg?1507420180797">
        <img id="office" crossorigin="anonymous" src="https://cdn.glitch.com/0a206311-a2e9-43c1-bb60-922daf36091f%2Foffice.jpg?1507420185497">


        <!-- Image link template to be reused. -->
        <script id="link" type="text/html">
          <a-entity class="link"
            geometry="primitive: plane; height: 1; width: 1"
            material="shader: flat; src: ${thumb}"
            event-set__1="_event: mousedown; scale: 1 1 1"
            event-set__2="_event: mouseup; scale: 1.2 1.2 1"
            event-set__3="_event: mouseenter; scale: 1.2 1.2 1"
            event-set__4="_event: mouseleave; scale: 1 1 1"
            set-image="on: click; target: #image-360; src: ${src}"
            sound="on: click; src: #click-sound"></a-entity>
        </script>
      </a-assets>

      <!-- 360-degree image. -->
      <a-entity rotation="0 -110 0">
          <a-sky id="image-360" radius="10" src="#frontdoor"></a-sky>
      </a-entity>
      
      <!-- front door -->
      <a-entity id="links" layout="type: line; margin: 1.5" position="1 -1 -1">
        <a-entity template="src: #link" data-src="#frontdoor" data-thumb="#frontdoor-thumb"></a-entity>    
      </a-entity>

      <!-- position of hallway -->
      <a-entity id="links2" layout="type: line; margin: 1.5" position="1 -1 -4">
        <a-entity template="src: #link" data-src="#hallway" data-thumb="#hallway-thumb"></a-entity>
      </a-entity>
      
      <!-- position of office -->
      <a-entity id="links3" layout="type: line; margin: 1.5" position="4 1 -8" rotation="0 0 0">
        <a-entity template="src: #link"
                  data-src="#office"
                  data-thumb="#office-thumb">
        </a-entity>
      </a-entity>
      
      <a-entity id="links4" layout="type: line; margin: 1.5" position="0 -1 -8" rotation="180 0 180">
      </a-entity>

   <!--   <a-entity position="0 0 8">
          <a-sphere color="yellow" radius=".1" id="theta3-thumb"></a-sphere>
    </a-entity>
   -->   

      <!-- Camera + cursor. -->
      <a-entity camera look-controls>
        <a-cursor id="cursor"
          animation__click="property: scale; startEvents: click; from: 0.1 0.1 0.1; to: 1 1 1; dur: 150"
          animation__fusing="property: fusing; startEvents: fusing; from: 1 1 1; to: 0.1 0.1 0.1; dur: 1500"
          event-set__1="_event: mouseenter; color: red"
          event-set__2="_event: mouseleave; color: black"
          fuse="true"
          raycaster="objects: .link"></a-cursor>
      </a-entity>
    </a-scene>
  </body>
</html>

EXTRA TIPS: If you want to show a general environment with the THETA, taking pictures at eye level provides a very familiar point of view. We used a tripod (careful, tipping over and scratching the lens is a common problem) and an extendable monopod. (See picture below.)

Also, we suggest setting the THETA “Shooting Method” to Self-timer with 10 seconds, so you can step out of the picture.
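The self-timer tip can also be scripted from the app side; a sketch using the OSC `camera.setOptions` command, where `exposureDelay` is the self-timer setting in seconds (the helper name is ours):

```javascript
// Build the JSON body for camera.setOptions that turns on the camera's
// self-timer. exposureDelay is the OSC option name, in seconds.
function selfTimerCommand(seconds) {
  return JSON.stringify({
    name: 'camera.setOptions',
    parameters: { options: { exposureDelay: seconds } }
  });
}
```

This body would be POSTed to the same `/osc/commands/execute` endpoint used for taking the picture.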


#11

Custom mobile app to take pictures with the RICOH THETA V WiFi API

Planned Features

  • Pre-defined common room tags with “one-press” picture taking and tagging
  • File name becomes frontdoor.jpg or office.jpg or hallway01.jpg
  • Create custom thumbnail menu image in-app based on tag
  • Images automatically sent to a central location over the Internet
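The tag-to-filename rule above could be as simple as this sketch (the zero-padded numbering for repeated rooms is our assumption):

```javascript
// Derive a file name like "frontdoor.jpg" or "hallway01.jpg" from a room
// tag. Pass an index when a room has more than one picture; single-shot
// rooms get no number.
function fileNameForTag(tag, index) {
  var base = tag.toLowerCase().replace(/\s+/g, '');
  if (index === undefined) { return base + '.jpg'; }
  return base + (index < 10 ? '0' + index : String(index)) + '.jpg';
}
```

So `fileNameForTag('Front Door')` gives `frontdoor.jpg` and `fileNameForTag('hallway', 1)` gives `hallway01.jpg`, matching the naming listed above.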

Not Planned

  • the navigable 360 image will not be displayed, though we may display the full equirectangular image
  • will not create the VR scene automatically. The VR scene will still require manual setup. The app is designed to reduce the time spent on web development and is not fully automatic
  • will not work on iOS. Only Android is supported
  • not using GPS or other accelerometer or positioning data in this phase

Benefits

  • reduce time and training for real estate agent taking pictures
  • reduce time for web developers to build VR scene

Other

This is a first step toward improving the efficiency of real estate agents in creating a VR scene for their clients. As manual creation is still required, the pictures can be sent to people anywhere in the world to reduce the cost of building the VR scene. Only HTML and A-Frame knowledge is required.


#12

This version shows the equirectangular image displayed below the main button set.

@jcasman let’s meet at 11am PT tomorrow to discuss next steps and how to finish the project.

Top Priorities

Real Estate Buyer Experience (headset experience)

  • delete extraneous menus in each room. We need to manipulate each sphere in JavaScript
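One way to approach deleting extraneous menus is to compute which menus should be visible from the current sphere; here is a minimal sketch of that logic (the room names and reachability map are our own illustration), kept DOM-free so it is easy to test:

```javascript
// Given the current room and a map of room -> rooms reachable from it,
// return the menu entity ids that should stay visible. Everything else
// would get visible="false" in the A-Frame scene.
function visibleMenus(currentRoom, reachable) {
  var neighbors = reachable[currentRoom] || [];
  return neighbors.map(function (room) { return 'menu-' + room; });
}
```

In the scene's click handler, the code would then loop over all `.link` entities and call `el.setAttribute('visible', ...)` based on whether the entity's id is in this list.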

Real Estate Agent (Picture Taking) Experience

  • add settings gear at bottom to adjust main camera settings with API
  • implement image transfer system from phone to web site

#13

11am PT sounds great, talk to you then


#14

About a year ago, we built this demo using an Android emulator.

The primary differences between that old demo and the current one:

mobile app for real estate agents

  • running directly on the mobile phone now
  • added image preview on mobile phone
  • better understanding of the real estate picture-taking process; now attempting to make the workflow more efficient (reduce the cost of VR scene production)

A-Frame Scene

  • added more control of menu placement
  • scene designed for headset navigation. previous version relied on web navigation
  • building concept to link rooms together

#15

using glTF for models

Installed collada2gltf on my Linux workstation

About to break for lunch.

Downloading an armchair from TurboSquid, in Collada format.

First attempt to import it into the scene did not work. Have the armchair in Blender now.

Attempting to load 4 files into Glitch.

Getting this error message. Will move to another server.

Some progress. That white thing in the lower left is the chair.

Have this error message, so will try and get the colors to work.

I inserted the chair into the Oppkey world headquarters office.

Rotated and put on stack of boxes.

Modified the mobile app to support a new market: “corporate office tours for new recruits”


#16

Tried several different ways of improving the experience for our VR real estate users. Adding text to the first sphere - an introduction with instructions - seemed like a good way to start users on the tour. I found ways to add text but was not able to successfully add the components needed:

We settled on branding the tour more directly. We imported our logo - Oppkey - since the scene could work well as a tour of an office or another location you’re showing off, perhaps as part of an interview process. Showing candidates where they will actually be working could help recruit them, so company branding could be an important item.

The Oppkey logo now guides the viewer to the office:


#17

Nice use of that logo. I think we have a solid entry for the Unofficial category of the hackathon. I learned quite a bit during the virtual hackathon.


#18

Here’s a presentation of our full project, submitted 13 minutes before the deadline!

MIT Hackathon Corporate Recruitment Tours.pdf (1.3 MB)