
This is essentially putting a Kinect in your phone and exposing it through (hopefully) high-level APIs. It may take 2-3 years to make it into regular phones, but when it does it will be huge. Apple's acquisition of PrimeSense (the makers of the 1st-gen Kinect) means they're also working on this.

To give you a real-world example: when I started BarSsense (http://www.barsense.com), the core problem was tracking the path and velocity of a weightlifter's bar. I bought a PrimeSense camera because it can extract a lot more data, with greater accuracy, out of an image than a regular camera can. After some prototyping, I decided to use a 2D camera and deliver the software as an app, because I thought wide distribution and ease of use were more important than the fidelity and correctness of the data - i.e., the "worse is better" approach. When these cameras make their way into regular phones, "worse is better" will suddenly become "better".
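
For a rough idea of what the 2D version involves, here's a minimal sketch using OpenCV, assuming a brightly coloured marker fixed to the end of the bar; the colour thresholds and the calibration step are illustrative, not the app's actual pipeline:

    # Minimal sketch of 2D bar-path tracking, assuming a coloured marker
    # on the bar end. Illustrative only, not the app's real pipeline.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("lift.mp4")
    fps = cap.get(cv2.CAP_PROP_FPS)
    path = []  # (x, y) marker centres, one per frame

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Threshold for a saturated red marker; ranges are illustrative.
        mask = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))
        m = cv2.moments(mask)
        if m["m00"] > 0:
            path.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

    # Pixel speed between consecutive frames; multiply by a metres-per-pixel
    # factor (e.g. from the known plate diameter) to get m/s.
    pts = np.array(path)
    speed = np.linalg.norm(np.diff(pts, axis=0), axis=1) * fps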




Wow, this is an amazing application. I love Olympic weightlifting, and seeing somebody build a training aid using computer vision in an app makes me feel like I'm living in the future. What a nice contribution to this beautiful sport.


This. I hadn't seen this app before, and as soon as I saw it I clicked "Install". Lovely stuff. I look forward to putting it to the test tomorrow and for the foreseeable future.

As an aside, do you expect to monetize this in some way, or are you just doing it for the good of man?


Thanks to both of you, I hope you find it useful and let me know if you run into issues.

I'm planning to monetize and I have a few ideas, hopefully one of them works out!


Very neat. Making the screenshot animated might give an even better idea of how it works and make the page a little more dynamic/involving. Do you have any plans to have it analyze the trace(s) and give the user suggestions on what to change?


Very cool. Did you do all of the coding and heavy lifting on this app yourself? It certainly raises the bar in this area! Hopefully it didn't weigh you down whilst developing it.

Sorry, today is clearly Pun Day.


You've used all of the puns :c


This may be nice for certain niche users.

But the trend I see is companies telling us to regularly buy stuff we really don't need (and obviously throw away our "old" solutions). The world has way more pressing issues than yet-another-gadget. And the planet's resources are not unlimited.


I'm sorry that I'm not able to solve world hunger or achieve world peace. Not sure why that means I cannot take it upon myself to hypothetically build a mobile computer-vision-powered app? Please enlighten me.


It doesn't mean that. It just means that if enough people think the same way, the world is doomed.

Or, if not doomed, it's not getting any better in areas that matter.

(People laugh at words like "doomed", assuming everything will be as it was when they were growing up. For some lucky ones that's true. For others the worst happens, like a financial collapse or a world war, and then they "knew all along it was going to happen".)


> It just means that if enough people think the same way, the world is doomed.

True in the sense that if "enough people" believe the Internet's favourite scare story of imminent financial collapse (growing in popularity ever since Y2K) then it will indeed, by necessity, finally happen ;)


What "scare story"? Financial collapse already happened in 2008.

People paid a trillion in the US alone, out of their own pockets, to ameliorate it (plus close to another trillion they lent to Detroit). And tons of middle/working-class jobs are not coming back in the foreseeable future.

And that's the US. For some European economies it is even worse -- from 30% unemployment to doubled suicide rates in 3-4 years' time.


That sure was a big crisis, but the word "collapse" to me implies total breakdown: after something collapses, it no longer exists. The "financial system" seemingly still exists, even if arguably not in its best shape since.


You're right that the science and practice of morality hasn't moved that much over the past 50 years while the science of technology has jumped by orders of magnitude, and that this is a problem that needs addressing. That doesn't mean we should stop evolving the state of the art in technology though. This is not an either/or situation. Ethical science relies on the advancement of technology, and vice versa.


I agree that the practice of morality seems sort of neglected in the western world.

I don't see how ethical science relies on the advancement of technology, though. I'm not even sure what ethical _science_ is, to be honest.


One fairly obvious application of this is motorbikes that can act like Google's self-driving car. If we replace most cars with two-wheelers (say, with a roof for comfort in the rain), we cut oil consumption for transport to a third, i.e. a third of the world's oil spending.

That's a 20% saving on global non-renewables. "Niche"?
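
To unpack that figure, a back-of-envelope sketch; all the shares below are rough illustrative assumptions, not sourced data:

    # Napkin math behind the ~20% figure; every share is an assumption.
    transport_share_of_oil = 0.6      # transport's rough share of world oil use
    remaining_fraction = 1 / 3        # two-wheelers cut transport oil to a third
    oil_share_of_nonrenewables = 0.5  # oil's rough share of non-renewable energy

    oil_saved = transport_share_of_oil * (1 - remaining_fraction)   # 0.4
    total_saved = oil_saved * oil_share_of_nonrenewables            # 0.2
    print(f"~{total_saved:.0%} of global non-renewables")           # ~20%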


I don't understand the application beyond gaming. The other applications discussed are basically shopping.

Essentially this would make it much easier to represent the physical world digitally. But what use cases does a consumer or the average phone user have for digital representations of the physical space around them, particularly given that the user is already aware of the physical space around them? How can this sort of digital device extend our ability to interact with physical space?


* Mapping the physical world

Ever wanted a floor plan for your home? Just wave your phone around.

* Visual annotation

Add direction overlays, see the plan for a play your team is executing on the helmet HUD, highlight "dangerous" (weaving, too fast, whatnot) drivers on the car HUD.

* Integrate sensor data to extend human perception

Add an IR overlay. Sample sound across the room and do a volumetric display of noise levels. "See" the strength of your WiFi signal (a sketch of this one follows the list).

* Image post processing

You have a 3D map of an area, plus pictures of all the textures - rearrange to your heart's content.

* Alternate Reality

Completely change the look of the world around you, just because you can. (Semi-useful application: Interior decoration. See that couch right in your living room before you buy it)

There are tons of applications there. It mixes the "reality" of physical space with the malleability of the digital space.
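
As promised above, a sketch of how the WiFi-visualization idea could work: tag each signal-strength reading with the pose reported by the device's motion tracking, then interpolate a volume to render as an overlay. get_pose() and get_rssi() are hypothetical stand-ins for whatever the device APIs turn out to be.

    # Sketch: volumetric WiFi-strength map from pose-tagged samples.
    # get_pose() and get_rssi() are hypothetical device-API stand-ins.
    import numpy as np
    from scipy.interpolate import griddata

    samples = []  # (x, y, z, rssi_dbm) tuples

    def record_sample():
        x, y, z = get_pose()               # device position from motion tracking
        samples.append((x, y, z, get_rssi()))

    def rssi_volume(resolution=0.25):
        pts = np.array(samples)
        xyz, rssi = pts[:, :3], pts[:, 3]
        lo, hi = xyz.min(axis=0), xyz.max(axis=0)
        grid = np.mgrid[lo[0]:hi[0]:resolution,
                        lo[1]:hi[1]:resolution,
                        lo[2]:hi[2]:resolution]
        # Nearest-neighbour interpolation keeps the volume defined everywhere.
        return griddata(xyz, rssi, tuple(grid), method="nearest")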


I think I'm a neo-luddite.


I don't think you're a luddite. With any technology like this, it's important to ask: what applications does this really enable, and will those applications simply be a waste of time?

This is an unveiling of a technology but the "applications" they show in this video are probably a waste of time to most users. Google have not shown a killer app that uses this tech. Maybe they're working on something (they hint that they may be planning to integrate indoor mapping into Google Maps which might be interesting) but they're not showing it in this video.

That doesn't mean the tech is bad. But we haven't seen enough to judge it as useful to end users.


Google has acknowledged that their vision of where this is going is, at best, partial: "While we may believe we know where this technology will take us, history suggests that we should be humble in our predictions. We are excited to see the effort take shape with each step forward."

I liked this part.


Google Maps already has indoor maps.


Yeah but they even said in the video that they struggle with indoor navigation and positioning.


This has applications even for, uh, "normal" people. For example, online real estate portals. People who are house-hunting would probably appreciate floorplans or a 3D tour. Agents, landlords or property developers would be able to easily create those with their mobile device.


>Agents, landlords or property developers would be able to easily create those with their mobile device

But they won't! It is very difficult to get those people to post more than a low res picture of one room online when they can very easily walk through the house with a video camera, for example. Even large apartment complexes have 1-2 pictures on their website and call it a day.


I'm currently house hunting (in Sweden) and most real estate agents go for full on visual overload. Everybody has at least 30+ 'artistic' photos. Most also have nice videos of the house on their site. Some even have clever 360 panorama thingies. I basically never watch them as they add nothing of value.

The single most important thing, for me, is a nice and correct 2D floor plan. Add to that 5-10 well-chosen photos and you're done on the visual front as far as I'm concerned. If that hasn't convinced me to at least go look at the house, no video or interactive 3D model is going to change my mind.

The big problem is that the information I consider important and the information the real estate agent is eager to share don't overlap all that much.


Indeed. Panoramic photo VR tours were also gonna be the next big thing for real estate (remember QTVR?) but that only made sense for one-of-a-kind high end properties. Everything else is a commodity.


Nail on the head.

I have chatted with one of the agents about how they work, and, no surprise, they let out everything by phone, because "uploading pictures to the website takes 24 hours."


I used to work for one of the largest real estate portals in Europe, and people seldom watched the video of a property (if available). We also added floor plans from floorplanner.com, and those converted way better. Now 25% of the listed properties have one.


To be honest, 25% is a very low number for such an essential piece of information about the property. Do you mind sharing how often people look at floor plans on your portal?

I'm a Russian living in the UK and it still surprises me that what Russians consider to be vital information about a property (gross internal and net internal area, kitchen area and a floor plan) is so rarely present on UK property sites. "Lovely 2-bedroom" is all you normally get.


That's because 99% of UK housing comes in a single standard size --- `XS`.


Do you see a lot of agents using Photosynth to give 3D tours? It has been out for 4 years.

Sure, having it embedded in a smartphone would be more convenient, but I'm sure agents have some kind of camera with them when they go into houses.


I pretty much stopped reading when they were going through the possible applications. Find my way around a super-store? Shit, I'd rather just stay out of it if at all possible.

In the meantime, I'm sure this will revolutionize something. It always does. I'm just in the same position as you ... a neo-luddite.

Also, I would much rather have a space elevator than a 3D mapping phone.


The 3D mapping phone gets you the space elevator, because you need AI lab-assistants / engineers / scientists to build the space elevator.


It's not really an either/or situation.


hah! That's the word I was looking for.


Since they want to "map the physical world", Google should buy the Euclideon guys already (Unlimited Detail Engine, Geoverse, etc):

https://www.youtube.com/watch?v=Irf-HJ4fBls

And then release it for VR applications/games (one can hope).


I thought they were investment-scamming techno-cons who had been releasing video demonstrations for years without ever releasing a product.


The technology is amazing, but honestly these applications don't really seem that valuable, especially to the average user. Facial recognition is the main application of machine vision I can think of, but even that has limited utility. And I'm not sure if it will actually benefit from this technology.


What offices are in the second floor of that building across the street?

Is there something special about the house I'm pointing my phone at?

Who owns the daycare centre I'm pointing my phone at? Have there been any licensing / regulatory violations?

Paint a coloured path on the floor or wall to guide me to the dentist's office.

I'm old and can't see very well. Give me verbal directions to navigate (indoors) to get to the clinic.

I've got a wet spot on my basement ceiling. Highlight the outside of the house where water might be entering.


Any mechanical engineer who's had to take a 3D object and model it in CAD will love this.

I've been toying with the idea of such software in a phone for a while but never bothered exploring it because I lack the technical chops. There's a piece of software I've previously used called PhotoModeler [1] that allows you to calibrate a standard digital P&S camera, then use a bunch of shots of the scene from various angles to build a 3D point cloud of it. Given that you can know the lens of an iPhone to pretty close accuracy, I was thinking that you could build a similar application straight into the phone, which could then upload 3D scans to Dropbox. It'd be invaluable for field work.

This takes the above idea and puts it on steroids. I'm really excited!

[1] http://www.photomodeler.com/index.html
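
For a sense of the core math, here's a minimal two-view triangulation sketch with OpenCV, assuming the camera intrinsics K and matched pixel coordinates pts1/pts2 (Nx2 float arrays) are already available from feature matching; a full pipeline would bundle-adjust over many views.

    # Two-view triangulation, the core step of multi-shot reconstruction.
    import cv2
    import numpy as np

    def two_view_points(K, pts1, pts2):
        # Recover the relative camera pose from the essential matrix.
        E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

        # Projection matrices for the two views (first camera at the origin).
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t])

        # Triangulate and convert from homogeneous coordinates.
        Xh = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
        return (Xh[:3] / Xh[3]).T   # Nx3 point cloud, up to scale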


This is called photogrammetry - check out Autodesk's 123D Catch iPhone and web app; it yields some pretty good results if you take enough evenly lit photos with decent overlap.


Try this: http://seene.co/

It does just that and even lets you post the scenes.


It's a path to HUDs, in the A Deepness in the Sky sense.

http://en.wikipedia.org/wiki/A_Deepness_in_the_Sky

You need situational awareness in a device to start using it to paint data onto the surroundings. Once you can do that, a whole world of applications unlocks.


Seems like this is much more likely Google's goal. Couching their arguments in terms of how the consumer will benefit left me feeling woefully unimpressed with the possibilities.

But clearly PK Dick's self-aware advertising will not be possible until such ads can distinguish a human from a column of marble or a dog.


This reminds me of William Hertling's 'AI Apocalypse' trilogy, specifically the last book 'The Last Firewall' where brain implants enabled depth-specific metadata info on everyone in vicinity.


It's machine vision. I mean, the possibilities are endless. Your phone can see now. This gives the device way more information about an environment than a camera would. I can put this on the dashboard of my car or on the handlebars of a bike, I could leave this thing on at home while I'm at work, I could use it as a click counter at a theater to measure capacity, I could navigate pitch darkness much better, etc.
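
The capacity counter, for instance, is almost trivial once you have depth: with an overhead depth camera, people show up as blobs nearer the sensor than the floor. A toy sketch with illustrative thresholds, where depth is a float32 array of distances in metres:

    # Toy capacity counter from an overhead depth frame.
    import cv2
    import numpy as np

    def count_people(depth, floor_dist=3.0, min_area_px=800):
        # Anything markedly nearer than the floor is a candidate person.
        mask = ((depth > 0.5) & (depth < floor_dist - 0.8)).astype(np.uint8)
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
        # Label 0 is the background; keep blobs big enough to be a person.
        return sum(1 for i in range(1, n)
                   if stats[i, cv2.CC_STAT_AREA] >= min_area_px)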


I think of it as similar in its final goal to AT&T/Olivetti Research's "The Bat" ubiquitous/sentient computing project. These links provide some good examples of stuff they did with it: http://www.vs.inf.ethz.ch/events/dag2002/program/ws/Beresfor... http://research.microsoft.com/en-us/um/people/shodges/videos...

In short, the physical world can be treated as a user interface to computers.

There's a large technical distinction, of course; in the case of the Bat, getting the technology up and running was sophisticated enough that the video I linked to spends a large portion explaining how it works. They used centralized computers and centralized sensors (echo-locators) to figure out where the device you are holding is within a known environment. Nowadays, we can put so much compute power and so many sensors in a device you are holding that it can figure out its own environment instead.


You don't understand it because it's beyond your comprehension, Morty. Because the world is full of idiots that don’t understand what’s important. And they’ll tear us apart, Morty. But if you stick with me, I’m gonna accomplish great things, Morty. And you’re gonna be part of them. And together, we’re gonna run around, Morty. We’re gonna- do all kinds of wonderful things, Morty.


It will let you see through walls.

Imagine being in a complex refinery, factory, etc., where tons of infrastructure is hidden behind walls or a couple of rooms away. Just hold your device in front of you and peer through its virtual portal, through the walls, onto your hidden surroundings ...

Being able to know where everything is, along with an overlay of real-time status, etc. will be valuable.
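
Mechanically, that overlay reduces to projecting a known 3D model of the hidden infrastructure into the camera using the device's tracked pose. A minimal sketch, assuming the facility model (pipe_pts_3d, a polyline of 3D points in the building frame) and the pose/intrinsics (rvec, tvec, K) come from the tracking system:

    # Sketch: draw hidden pipe runs over the live camera frame.
    import cv2
    import numpy as np

    def draw_hidden_pipes(frame, pipe_pts_3d, rvec, tvec, K):
        pts_2d, _ = cv2.projectPoints(pipe_pts_3d, rvec, tvec, K, None)
        pts_2d = pts_2d.reshape(-1, 2).astype(int)
        for a, b in zip(pts_2d[:-1], pts_2d[1:]):
            cv2.line(frame, tuple(a), tuple(b), (0, 255, 0), 2)  # green overlay
        return frame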


> But what use cases does a consumer or the average phone user have for digital representations of the physical space around them, particularly given that the user is already aware of the the physical space around them?

For the user? Probably not much at the moment, if ever.

For a company whose mission is to collect, aggregate, and extract monetary value from every last piece of data in the world?

For a government that's interested in extending its awareness?

Priceless.


Example app:

You can see the crowd moving on the streets, where each person is a box (or a 3D avatar)... then, if other people have installed the same app as you, it will show an icon on top of the box representing that person (the phone sends a signal: IR, BT, whatever), or everyone is geolocated by a central server that syncs their positions.

Then you can interact with those strangers on the street. The 3D box you see representing the person may carry more clues about them.

So you could send a message to a girl/guy you liked and ask them to hang out with you, for instance. It's like a people radar.

It can work in traffic too, so you can tag people in the cars around you.

You could create a game, involve the people you have tagged, and give each one a role, like in an RPG.

If you get the camera feed too, you can see people with 3D stuff on top, like holding a 3D gun, or a secret message on their virtual shirt. It would work like a magic glass.

I had this micro-idea a year ago; this is the technology to make it work. Feel free to use it to create something.

Just call me for a beer later :)


The blind. That use case is even demonstrated in the video.


Ocular implants are not exactly new technology. The argument, I think, would be that a phone would be a cheaper, more accessible alternative. But I'm interested in whether the 'audio cues' they mention in the video would be any better than the audio cues already present in most environments.


I'm only aware of wildly experimental implants providing like 200 dots of light. They aren't by any means a deployed medical technology.

This sort of system could provide audio cues in any environment. That's already a big step up from 'most environments'.
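
One plausible shape for those cues: map the nearest obstacle in the direction of travel to beep pitch and rate. A toy sketch with illustrative numbers, where depth is a float32 array of distances in metres:

    # Toy depth-to-audio mapping: nearer obstacle -> higher, faster beeps.
    import numpy as np

    def obstacle_cue(depth):
        h, w = depth.shape
        # Look at a central window, roughly the direction of travel.
        window = depth[h//3:2*h//3, w//3:2*w//3]
        valid = window[window > 0]
        if valid.size == 0:
            return None                     # nothing sensed: stay silent
        nearest = float(valid.min())
        pitch_hz = np.interp(nearest, [0.3, 4.0], [1200, 300])
        interval_s = np.interp(nearest, [0.3, 4.0], [0.1, 1.0])
        return pitch_hz, interval_s         # feed these to a tone generator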


Combined with something like Tactus [1], the environment could be represented as a tactile 3D map [2].

[1] http://techcrunch.com/2014/01/10/new-tactus-case-concept-bri... [2] http://www.solidterrainmodeling.com/


Well, I'd imagine using computer vision that can identify door handles and elevator buttons (even going so far as to read out the numbers on the display).


Personal transport. It makes what Google did with self-driving cars possible for far less stable transporters: motorbikes, bicycles, even small crates with wheels or rotors in large logistics centers. It could mean plans like drone-based drug delivery have better technology for crowded places, like when approaching futuristic health centers that synthesize or store drugs. This is huge.


Imagine integrating this with a future version of the Nest thermostat or an integrated lighting system: you could create different environments in a single room and change them on the fly. An electrician working in a hotel would be navigated to the right junction box. You buy a new car and the manufacturer provides you with a virtual manual to find the new controls, maybe even do basic repairs.


One example would be better versions of the iRobot cleaners. Currently they rely on a series of semi-random movements and aren't particularly smart, which makes them inefficient and means they rarely get into the edges of spaces. If you own one, you've probably moved your furniture to accommodate its strange behaviour.
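
With a real map, the cleaner could plan a systematic sweep instead of bouncing around. A toy boustrophedon sweep over an occupancy grid (True = obstacle); a real planner would also handle connectivity around obstacles:

    # Toy systematic coverage on an occupancy grid.
    import numpy as np

    def boustrophedon_path(grid):
        path = []
        for r in range(grid.shape[0]):
            cols = range(grid.shape[1])
            if r % 2:                   # alternate sweep direction each row
                cols = reversed(cols)
            path.extend((r, c) for c in cols if not grid[r, c])
        return path                     # cell visit order for the robot

    room = np.zeros((4, 6), dtype=bool)
    room[1, 2] = True                   # a chair leg
    print(boustrophedon_path(room)[:5])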


Neato's robot vacuum cleaners already do this using Lidar.


Wow, did you post this on the wrong website. "I don't understand."


I think you missed a real opportunity to name your app "Do You Even Lift".


This is sick! I lift and I've been wanting something like this for a looong time.

I think the end game of this technology could be helping people without a coach learn to lift. From what I've read about how people learn, immediate feedback is extremely important, as in within a couple of seconds. I'd love it if this took the bar path and velocity information and put it into a machine learning system. The app could watch you in real time and immediately tell you whether it was a good lift or what was wrong with it: lift your butt up faster, lift the bar faster or slower, or whatever. I think you'd need to sit down with a couple of good coaches and have them classify what's wrong or right with several hundred lifts to get your training dataset, but I would love to pay for this product.
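
A plausible starting point: summarise each tracked lift as a few features and train a classifier on the coach-labelled examples. A sketch with scikit-learn; the feature definitions and label names are hypothetical placeholders:

    # Sketch: classify lifts from bar-path features (all hypothetical).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def lift_features(path_xy, fps):
        path_xy = np.asarray(path_xy, dtype=float)
        vel = np.linalg.norm(np.diff(path_xy, axis=0), axis=1) * fps
        return [
            vel.max(),                       # peak bar speed
            np.ptp(path_xy[:, 0]),           # horizontal drift of the bar
            np.abs(np.diff(vel)).mean(),     # jerkiness of the pull
        ]

    # X: rows of lift_features(); y: coach labels such as
    # "good" / "early_arm_pull" / "bar_loops_forward".
    clf = RandomForestClassifier(n_estimators=200)
    # clf.fit(X, y); clf.predict([lift_features(path, fps)])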


How is progress on BarSense for iOS?


Yes, this please. My entire CrossFit box would start using this tomorrow if it were available on iOS. I don't know anyone with an Android.


Thanks a lot, the iOS version should be out sometime this spring.


Good app. But you misspelled the name: BarSense.


Nice app! Quick spelling check though:

You can focus on and isolate part of a rep to really understand how well "YOUR" performed


This is really cool. Does it compute power along the trajectory as well?

The problem I have is that I don't have a smartphone. Each cool app like this brings me closer to getting one though.

Can anyone suggest an alternative to this app that could work with video files? Or maybe something that I can stick to the barbell?


Wow. Great app. I would have loved this app back when I was into oly lifting.


I've been looking for something like this! Downloaded.


Cool app! Downloaded.



