1) Cleaning the data as it comes in rather than in batches so we can use it sooner: discarding invalid records, outlier detection, normalizing inputs, etc.
2) Warehousing of the data with proper indexes so you can perform some advanced queries on unstructured data
3) Some data is sent in bulk at the end of the day; some of it is streamed in fire-hose style. How can we preprocess the fire-hose data so that we don't have to wait until the end of the day to parse it all?
4) Oh and all of this data is unstructured and comes from 75 different sources.
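To make requirement (1) concrete, here is a minimal sketch of per-event cleaning in Python. The record schema and the 3-sigma outlier cutoff are illustrative assumptions, not from any particular feed:

```python
import math

def clean_event(event, stats):
    """Validate, outlier-check, and normalize one incoming record.

    `event` is assumed to look like {"source": str, "value": float};
    the schema and the 3-sigma cutoff are illustrative only.
    """
    value = event.get("value")
    if not isinstance(value, (int, float)) or not math.isfinite(value):
        return None  # discard invalid data immediately

    # Welford's running mean/variance, so outlier detection works per event
    stats["n"] += 1
    delta = value - stats["mean"]
    stats["mean"] += delta / stats["n"]
    stats["m2"] += delta * (value - stats["mean"])

    if stats["n"] > 30:  # only flag outliers once there is some history
        std = math.sqrt(stats["m2"] / (stats["n"] - 1))
        if std > 0 and abs(value - stats["mean"]) > 3 * std:
            return None  # drop 3-sigma outliers

    # Normalize into a fixed output schema
    return {"source": event.get("source", "unknown"), "value": float(value)}

stats = {"n": 0, "mean": 0.0, "m2": 0.0}
events = [{"source": "feed1", "value": 10.0}, {"source": "feed1", "value": None}]
cleaned = [c for e in events if (c := clean_event(e, stats)) is not None]
```

Because each event is handled independently with only small running state, the same function works on the end-of-day bulk files and the fire-hose stream alike.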
Soon the average hedge fund will have more people just cleaning and managing data than they do in quantitative research, dev ops, software development and trading.
Oh, and lots of the data is considered proprietary, so while AWS/Azure, etc. are fine, sending it to a third party to process is not.
TL;DR
Help me, I'm drowning in data. How do I get the time from when I acquire data to when I trade based on it down to a reasonable time frame, where reasonable is closer to hours than to days or weeks?
Great questions. I worked on similar problems in the weather/ag space for a few years, trying to minimize the time between when data was acquired and when it was ready to inform a decision.
We threw every rule out the window in the name of performance _when fetching raw data from external sources_. So we had weather station networks, NOAA forecast runs and NASA satellite data in a workable schema in our shop way faster than average. Mix of C, PowerShell, Perl, and the nonstandard parts of T-SQL, highly parallelized, tricky but fast.
After the "workable schema" was established, the rules came back and we acted more responsibly. Smart instead of clever.
Ran this stuff all day long, getting every piece of data ASAP. For things that could only be calculated with a full day of data, we poked and prodded the meteorologists to express them as "partial aggregates", which to me were just like the map steps before an EOD reduce.
Took a lot of mutual understanding and iterating, but it was worth it in the end. When the ultimate data source (satellite or radar site for us) posted its last hour of data, we were 95% done with the day's computation work. We did our last step, published our numbers, and bam, our ag clients had this stuff a day earlier than they were used to.
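The "partial aggregates" idea can be sketched concretely. A minimal Python example, assuming the daily statistic is a mean; any statistic that decomposes into mergeable hourly summaries works the same way:

```python
def partial_aggregate(hourly_readings):
    """Map step: boil one hour of raw readings down to a tiny summary."""
    return {"sum": sum(hourly_readings), "count": len(hourly_readings)}

def eod_reduce(partials):
    """Reduce step: merge the hourly summaries into the daily statistic."""
    total = sum(p["sum"] for p in partials)
    count = sum(p["count"] for p in partials)
    return total / count if count else None

# Each hour's partial is computed as soon as that hour's data lands,
# so by end of day only the cheap merge remains.
hours = [[1.0, 3.0], [5.0], [7.0, 9.0]]
partials = [partial_aggregate(h) for h in hours]
daily_mean = eod_reduce(partials)
```

The same decomposition covers min/max, counts, histograms, and (with a slightly richer summary) variance; statistics like an exact median are the ones that resist this treatment and need negotiating with the domain experts.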
It's a native streaming platform, so your data will be cleansed, processed, and scanned for outliers event by event rather than in batches. We have dozens of streaming connectors for IT/Enterprise/Web data sources. We also support initial load for your firehose data. For unstructured data, we have support for regex-based parsers.
Shoot me a message if you have any more questions. We have many big name users in Aerospace, Banking, Device manufacturing, and Logistics industries.
Interesting. We worked in this area for many years, did the 'startup' in this area, were acquired by Intel, and open-sourced the product (obligatory plug: https://github.com/01org/hyperscan ), and I would never claim that we are anywhere near 'solving' regex.
More like "mitigating" or "occasionally getting regex slightly more right than some other solutions". There are many different approaches to regex and all seem to focus on different parts of functionality (RE2 focuses on quick compiles and simplicity, libpcre has 'all the functionality', we're about streaming + large scale + high performance if you can tolerate long compiles and lots of complexity). A number of new projects are trying very interesting approaches, like icgrep and the Rust regex guys.
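To illustrate why streaming regex is its own problem: a match can straddle a chunk boundary, so naively scanning each chunk in isolation misses it. A hedged Python sketch of the buffering workaround, where `max_len` is an assumed upper bound on match length; a true streaming engine like Hyperscan carries automaton state across calls instead of buffering and needs no such bound:

```python
import re

def stream_findall(pattern, chunks, max_len):
    """Scan chunked input for regex matches without buffering the whole
    stream. A short tail is carried between chunks so matches spanning a
    boundary aren't missed; `max_len` bounds how much tail is needed.
    """
    regex = re.compile(pattern)
    carry, results = "", []
    for chunk in chunks:
        window = carry + chunk
        done, keep = 0, len(window)
        for m in regex.finditer(window):
            if m.end() < len(window):
                results.append(m.group())  # cannot be extended by future input
                done = m.end()
            else:
                keep = m.start()           # may continue in the next chunk
        # Carry: nothing already emitted, everything that might still match
        carry = window[min(keep, max(done, len(window) - max_len + 1, 0)):]
    results.extend(m.group() for m in regex.finditer(carry))  # final flush
    return results

# A match straddling the chunk boundary is still found whole:
matches = stream_findall(r"\d+", ["abc12", "34def"], max_len=8)
```

Here `matches` is `['1234']`, not `['12', '34']`: the "12" at the end of the first chunk is held back because it might continue, which is exactly the bookkeeping that streaming-mode engines internalize.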
I have seen local companies work for months or years before they could finally use their BI package, but the trouble at step 1 is big (and so is getting the data into a "nice" schema).
The problem is that entering this space is hard. Years ago I was at a company that had a niche product (in FoxPro) for this kind of task, and I have dreamed about building something like this based on my experience, but getting funding for this kind of "boring" task is hard (more so in my country, Colombia).
P.S.: If you want help, we can talk. I can't give you a magical solution, but at least I find this kind of "boring" job compelling ;)
Check out Holistics.io (disclaimer: I'm a cofounder). While I can't say we solve all the problems you listed above, we provide enough tooling on top of your DW to help you pre-process (clean, aggregate) your data.
I don't know much about Paxata, but I think Trifacta is well regarded in industry and academia. The Trifacta founders worked on the open-source research project Data Wrangler (http://vis.stanford.edu/wrangler/) and turned it into Trifacta.
There are good open source options for each step here - is the solution you are looking for just a UI and easy install process? Or would your ideal solution make all of the decisions for you - data structure and format, which data is/isn't valid, what output options are possible, managing server resources, etc.?
I'm not very familiar with the open source options since after many years of coding this by hand, I work with what I know. I am a developer that works with data, not a Data Scientist, so I don't really know the lingo and whatever hipstery terms people are using these days. I will answer to the best of my ability, though (mostly for my sake, who knows if this will be useful):
Cleaning:
OpenRefine seems to be the best product in this category. I haven't used anything but my own tools to do this before, so I can't really offer any advice.
Warehousing:
My understanding is that this is just a fancy way to talk about a database with a schema designed for analytics. There are many open source databases which do this very well, the one I use being Cassandra (and/or KairosDB), though it is also likely the one that is hardest to use. For a beginner, you might want to refer to this SO answer: http://stackoverflow.com/questions/8816429/is-there-a-powerf...
Data processing/collection:
This is something that is incredibly dependent on the data sources, so I likely can't tell you anything that will help. Most of the data sources I've worked with have been internally sourced log files, messages from ZMQ, or CSV data - you might be working with something far different, though, since there are lots of public data sets and such which are common. Ideally, this would be integrated into the tools that you are using to clean the data, but I don't know if that exists.
Handling input from many different sources at different rates is not a very hard problem to solve if your system is built correctly - you could, for example, run a daemon for each data source which populates the database when there is new data available, then sends a message off to the processing engine, which will integrate the data into whatever reports you are running.
Specifically for a use case of a hedge fund, the reports could be triggered by a message which is sent when the new data is available, and processing could be done in parallel in Lambda or similar dependent on need to get a nearly instant return, enabling nearly real-time reporting.
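A toy version of the daemon-per-source pattern described above: an ingest worker writes to a shared store and posts a message, and the processing engine reacts to messages instead of polling. The in-memory dict and `Queue` stand in for a real warehouse and a real bus (ZMQ, Kafka, SQS, ...); the names are illustrative:

```python
import queue
import threading

store = {}           # stand-in for the warehouse
bus = queue.Queue()  # stand-in for the message bus

def ingest_daemon(source, batches):
    """One of these would run per data source."""
    for batch in batches:
        store.setdefault(source, []).extend(batch)  # populate the database
        bus.put(source)                             # notify the engine

def processing_engine(expected_msgs):
    """Blocks on the bus and runs an incremental report per notification."""
    reports = []
    for _ in range(expected_msgs):
        source = bus.get()                            # block until new data
        reports.append((source, len(store[source])))  # report on that source
    return reports

t = threading.Thread(target=ingest_daemon, args=("feedA", [[1, 2], [3]]))
t.start()
reports = processing_engine(expected_msgs=2)
t.join()
```

The point of the design is that reporting latency becomes per-message rather than per-day: slow bulk feeds and fast firehose feeds share the same notification path, and each report runs as soon as its triggering data lands.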
In my current job I use Informatica Cloud, which either by itself or in combination with other Informatica products can do these things. I have two main complaints about it:
1. The UX is subpar. It insists on running in only a single tab at a time, and attempts to open multiple tabs will instead override whatever it considers to be the master tab. This is a huge pain, because I often need to have a mapping workflow open in one window and some other relevant part of the application open in another. Instead I have to save, go find the thing I want, and go back. Another problem is that when working with data sources containing tons of fields, there's no easy way to search.
2. It offers an expression language to perform some computational tasks, similar to what you'd find in Excel, but it's hamstrung by a poor UI and a limited amount of functions. The built-in editor for expressions is really poor (see Tableau for an example of a great editor for a simple Excel-like language; it even has type linting) and, unless I've misunderstood something, you can't declare any variables so you end up with huge nested expressions. There aren't many functions available, so something as simple as removing whitespace ends up as lstrip(rstrip(foo)). In combination with no support for statements (or at least a let expression like in lisp) this makes any nontrivial data munging completely indecipherable.
I've looked around in this space and it seems like there are a variety of products, but the supplier of our main CRM will only support Informatica Cloud. I think that a company that can offer a product that does what you've said but makes a serious effort at UX could cause users to revolt and demand to use it! I know the joke is that Slack is just a pretty IRC with better UX... but that's exactly why it has become so successful.
In terms of data munging, take a look at Microsoft's Power BI. It's visualization software but it has a nice data munging mode that, crucially, keeps track of all the changes you make and displays them in a linear format. This is great for getting a quick idea as to what was done with the data and is essential for doing reproducible data analyses. Unfortunately, Power BI also suffers from poor UX in insisting on tiny fonts and gray-on-gray palettes that are totally unreadable to anyone over 30.
What I envision a solution to be like would be something like a configurable/scriptable OpenRefine (formerly Google Refine) with streaming ingestion/extraction, a validation/parsing engine (something more elegant than regex, though you could drop into that if necessary), and maybe a pluggable event processor (e.g. Spark or Flink). I would love to work on such a problem, and solve it.
One thing you don't really mention here, but it's mentioned a lot in the comments, is the data extraction piece. Is data extraction a pretty solved problem at this point, and it's really the intelligent cleaning, transforming, then warehousing / analysis that's the unsolved issue?
Omg, so many sales pitches. You should figure out which of those were automatically generated by someone whose bot is crawling HN and using NLP to find posts like this, and then hire them. There's basically 0 chance that isn't happening...
I've been working on similar issues, and developing towards solving them. I work with mixed schemas, handle user-defined processes inline, and let you gather stats and see everything as it streams, plus accommodate a workflow. I'm only a month from launching my beta and want to give away licenses to people who can use it and give me feedback. Here is the landing page: http://ohm.ai Oh, and it uploads nothing... it all runs in your browser. anthony dot aragues at gmail dot com if interested.
Palantir doesn't really have a product, they write tons of code to put everything together and make it look seamless from the UI but they don't have anything drop in as far as I know.
I tried to evangelize THIS very problem with my previous company (a somewhat successful managed data infra service) for a year, it was AMAZINGLY difficult to make my executives even understand the problem and its magnitude.
Chollida. I have been working with a startup in Seattle tackling these very issues. Super great team and great software, please get in touch and try it out! datablade.io
Have you looked into Snowflake? Seems like their solution satisfies most of your requirements, including native ingestion of unstructured data. The one caveat is that all source data must first be loaded to S3. https://www.snowflake.net/
Take a look at Apache Metron's architecture. They seem to have dealt with a number of the issues inherent in putting together data processing pipelines of open source components and ensuring they work together. The project is active but incubating right now.
Great question, but I don't see this as much different from other industries or companies with millions or billions of dollars at stake. They don't all roll their own software in-house, and most companies (hedge funds or not, small or even large) simply cannot afford to for financial and commercial risk-management reasons, unless they can justify the software truly being a core competency and a competitive advantage.
The same line of inquiry has been evaluated for most 3rd party software that companies rely on. For this specific instance of data collection and cleaning, I'm imagining it's not going to be a much different calculus, although perhaps you'll see a higher percentage of firms choosing to roll their own if they have the chops and pockets (e.g. Two Sigma, Bridgewater, Goldman Sachs, etc.).
I will note that there are commercial mechanisms firms could implement to try to limit the downsides in case something like this happens: warranty & damages provisions and insurance are two that spring to mind. I'm sure there are numerous other considerations in the age-old "build or buy" cost-benefit analysis.
On a smaller scale, my EasyMorph might be of help. It's a lightweight ETL tool, and you can do way more with it than with typical data preparation tools: http://easymorph.com
I've actually been working on something like this for a while now, and found your comment about proprietary data interesting. Would this mean that hosting this data in a third party server is out of the question for you? OK with NDA?
A family member is a lawyer in the Worker's Comp, SS, and Family Law space. THE software for lawyers in this space is called A1 Law. It solves a lot of real problems lawyers in that space have (form letter generation, calendar integration, case management)... but it's so slow to use new technologies. They advertise PalmOS integration. My family member has to have their own server in a closet running the server version of this so his team can use it! He has no idea how to manage a server, it's absurd that he has to.
Everyone I know in law is dissatisfied with every part of their tech stack. If someone could come up with an integrated SaaS solution, and be SUPER careful about compliance... they would be printing money.
I would strongly encourage people to think twice about trying to sell software to lawyers.
It's a Sisyphean task. They are, as a rule, extremely anti-technology and conservative. At a previous startup, we had built software which was saving customers many hours a week—yet it was still an uphill battle to get approval to pay for it.
If even after all the warnings in this thread you really want to build legal software, focus on disrupting lawyers instead of selling to them.
In general, law and media are two of the worst fields for technology.
I worked in a tech department of a rather large law services organization. There was a desire to maintain certain inefficiencies so that more hours could be billed to clients. If it could be done in 25% of the time, that's 75% less they could bill clients for 'attorney time spent.'
See how well pitching 'do it faster and make less' goes over.
Can't they still bill whatever they want? If 10 hours of proofreading, form filling, photocopying, and filing would be billed for $1500 (10 hours x $150 an hour), couldn't they still charge $1500 if the software took 80 milliseconds to do the same job?
Oh, the clients expect an itemized bill? Simple, the above charges would be "10 legal intern equivalent hours @ $150/hour". If a client questions it, the lawyer can explain that they are now using a very expensive piece of software instead of interns and attorneys for certain tasks, but felt it was an ethical obligation to quote the cost in a human understandable way. Turn the arbitrary pricing into a positive!
And of course your software should be able to quote all its tasks in these legal intern equivalent hours. This also leaves the lawyers hands clean since they can say that the software came up with hourly figure, not them.
I think the problem is one of mindset. From my perspective, even looking at accountants. Their industry and the clients they serve think that they are selling their "time", and not a service by itself. So, reducing the time it takes the lawyer/accountant to do something simply means less time being billed. It does not mean that they can now charge "more" for the time, as the cost per unit of that expert's time appears to be fixed by some other mechanism. Like seniority and years of experience, and not efficiency.
Perhaps bringing it back to a development perspective might shine some more light on it for us. Imagine you're a freelance developer and you've now developed (or bought) a fancy piece of software that allows you to do plenty of code-generation and reduce the amount of menial database layer code that needs to be written. You're now say 1.5x more efficient at delivering a product. What are you to do then? I doubt many clients would agree to a once-off fee for usage of your fancy code generation tool, even if you phrase it as saving "4 intern developer hours", and charge appropriately. There is also probably a cap on the hourly rate they're willing to pay you. Either that, or you change to a per-deliverable or product pricing model.
Exactly. The legal industry predominantly uses hourly billing and making up "equivalent hours" would be extremely unethical.
It's part of why I encourage everyone I know (particularly developers) to switch from hourly to fixed-price billing. Any efficiencies you gain should belong to you, not the customer. (There's also the fact that I find a lot more people are willing to pay $10k for X than $250/hr for 40 hours.)
The problem with fixed price as a developer is that requirements are rarely understood or detailed enough to actually be able to bid the job. "Export to PDF" - ok, no problem -- that then turns into: "Can you add page numbers? Can you support A4 and US letter sizes?" And you have scope creep -- one more little thing isn't reasonable to say no to, but it quickly becomes death by a thousand paper cuts. OK, so now you price in that inevitable scope creep, and now your price is much higher. "$5000 to export to PDF? That's crazy!" -- yes, but I am anticipating the fact that you don't know exactly what you want. "But we do, we made it clear!"
You see how that goes. Project pricing leads to a guessing game. Billing hourly is fair for everyone, at least in software. If I am more efficient, I pass that onto the customer. I don't 'lose' money -- it usually results in more work.
Imagine charging $80 for some corn because I want to make the same money as if I had guys hand-picking and hand seeding and doing the entire farming process without machines. That corn only cost me $0.10 to produce but I am charging a price as if I didn't have modern efficiencies. I would sell a lot less corn and actually profit less due to both competition and price elasticity. People would look for alternatives to corn.
In software, not passing on efficiencies means that there would actually be a smaller market for software development. Imagine how bad the market would be for us if we wrote everything in assembly. A simple web site might cost $100m and there'd be exactly 5 people in the world building websites.
I did some fixed price work this summer for a project where I thought the scope was unusually well understood by both sides. About 3 months, 60k USD if done by a fixed deadline (yes - fixed scope, fixed price, fixed deadline!) and as far as I was concerned from the original spec I had it done within about 6 weeks.
Of course, I spent the rest of the project time politely asking the customer to sign it off and doing the odd freebie to try and keep them happy but mostly at home, not working and not wanting to take anything else on in case they turned round and said I'd screwed up somewhere massively.
Perhaps unrelated, but I still haven't been paid for all of it either. Still, if I do eventually get paid in full, it will have worked out better than charging per hour.
That's why Scope of Work documents exist: to protect against scope creep.
If a customer demands additional features, you prepare a Change Order and say, "OK, here is how long it will take and how much extra it will cost."
After a while they learn discipline and stop asking for changes half-way (or more) into the project.
Here is another perspective: the vast majority of features I've built as part of Change Orders rarely, if ever, got used. Granted, I make sure all relevant stakeholders are involved in the creation of the initial Scope of Work. That way, there are no latecomers who demand changes/additions.
The way I think about it is that general efficiency gains flow to the customer, while my unique efficiency gains are mine. So if it might take the average developer 100 hours to finish something but I can do it in 80, then I should charge as though it took 100.
The problem with hourly billing is it very poorly aligns incentives. It actually discourages efficiency because the easiest way for me to make more money is to take longer.
Also, psychologically, most clients are not comfortable with the vast differences in appropriate pay between developers. Even in the worst case (where scope was poorly defined and/or I estimated poorly), I'm making more now than I ever did with hourly billing.
If you had a monopoly on modern farming, it would absolutely make sense to charge $40 for corn. You'd soak up all the demand (since you're undercutting the $80 hand-harvesters) while still having massive profit margins.
Getting good at scoping is difficult but by no means impossible.
I chose my accountant because I could record my spending online. He can generate my yearly reports with one click in his backend, which he bills €80 for, "per click". He gives better advice because he has more customers thanks to this efficiency; he doesn't spend time on mundane stuff, and he spends most of his time meeting customers like me, who present him with various problems to solve, so he's more accustomed to problem-solving.
Companies in my coworking space are switching one after another. One has gone from a ~€6500 yearly bill to ~€3500 (3 employees), while improving reporting.
Non-industrialized accountants are just as necessary as human cashiers: Not. Lawyers are a bit harder to industrialize.
Not saying something like this would be impossible, just saying what I saw. However, the line of reasoning could be slippery. For example, they could say "1000 lawyer hours at X dollars an hour." When questioned, the company says "we could have had them trace all the documents with a pen to copy them by hand. Instead we used a photocopier, so we're billing you for the awesome technology." Seems like it'd be a hard sell and possibly unethical (in another way than it already is). I do think things might be able to be changed in terms of mentality, though.
I think contracts (between law firms and their clients) use hours worked because they don't know upfront how complicated cases will be, how long it will take etc. It's not just for "understandable pricing". Your "bill whatever they want" suggestion is basically saying that at the end, the law firm can quote whatever price they want, and the client agrees up front to pay that.
Actually, I'm a lawyer and a product owner at a big accounting firm, so I'm pretty well positioned to discuss this topic. I have half a dozen solid startup ideas that I'm tempted to take and start running with right now. Documentation management alone is in desperate need of innovation. I build toy apps in my spare time that cut thousands of dollars of charge hours young associates waste on bullshit tasks, just within my one small practice group. The whole legal software industry is a joke.
But here's the real problem for anyone looking to innovate in that space: the customers. Lawyers are as a rule anti-technology, slow to adapt new techniques, and set in their ways. Worse yet, they just bill their clients for their shitty software like Lexis or WestLaw, so they aren't even personally motivated to reduce costs.
> cut thousands of dollars of charge hours young associates waste on bullshit tasks
Doesn't this take money away from your firm? It is only when firms are competing on cost, time or client recognized quality that they will institute better workflows via software.
Depends, there are billable hours e.g. time spent with customers and non-billable hours e.g. administrative tasks not specific to any customer such as payroll, transfer of knowledge between work colleagues. So by targeting reduction of non-billable hours, you would have a compelling reason to sell to firms who bill by time.
That's pretty much true in general. Our fees are dictated more by the market than our actual costs. So if we were billing fewer hours, we'd just raise the hourly billing rate for our associates.
From our perspective in the national standards group, we would actually want our associates to just spend more time on value added activities. Instead of wasting time organizing PDFs of exhibits and monkeying around in spreadsheets, we want to them evaluating the relevant legal and technical tax issues. So it's not precisely cost control that is the primary concern, but quality assurance.
No, because we are in fact competing on cost and client-recognized quality, and to a lesser extent time. Plus our fees are driven more by the market than by our actual costs, so if we billed fewer hours, we would simply bill at a higher rate to reach the same expected fee while still maintaining our position in the marketplace. Or if we could reduce our fee, we might be able to win more market share.
The pejorative term in the industry for padding billing with useless busy work is "fee justification," which really shouldn't ever be necessary. Especially in my practice area, because there's always more work that can be done to flesh out our deliverables, which in turn makes them more effective for convincing the IRS (or state equivalent) or an appeals judge. When I say I've cut thousands of dollars of charge hours, we didn't simply stop charging those hours, we allocated them to more useful, value added activities.
Right now, staff spend far too much time inefficiently manipulating data in Excel, manually organizing exhibits, and a variety of other mundane, low cognitive effort tasks (I can't really specify what kinds because that would essentially doxx me). They feel productive, they look productive, and they meet their charge hour goals. And it allows them to procrastinate on the more mentally taxing work, like evaluating the relevant legal and technical tax issues, which in turn detracts from the quality of our service. Our clients aren't paying us to be extra-expensive outsourced spreadsheet monkeys. They're paying us to eliminate uncertainty about complicated legal and tax issues. So freeing up engagement budget and the staff's mental bandwidth to focus on the high value added cognitive services is tremendously useful in improving quality.
And in terms of time, we compete on that in some cases where there's an audit, exam, or appeal deadline and the client came to us late in the game. But that's an edge case and relatively rare. Certainly having a reputation for being quick, efficient, and timely wouldn't hurt our market position, though.
This is great news! I did a PoC for a document search system for discovery, OCR and full-text search. We deemed it too hard of a sell for law firms. Maybe the landscape is changing.
A lawyer friend of mine told me his firm doesn't use software to check all the "hereafter referred to" are bound to their template and vice-versa. They instead have to print the document and go through it with a highlighter.
The firm charges their clients on an hourly basis, so they don't really have an incentive to be more efficient.
I feel like charging on an hourly basis is a common pattern in many industries that opens the doors to competition from startups with different pricing structures...as long as the startup can do everything in a manner compliant with the existing industry.
Logojoy, for instance, is an example of a service that supplants human labor with a single "good-enough" deliverable at a low price, and does so in a fraction of the amount of time. I imagine this would be much more difficult in legal settings, but LegalZoom seems to be alive and kicking, so it must be possible.
To your second paragraph, I would add that it's hard for customers (and lawyers) to figure out what is "good-enough" in the legal setting. I'm a lawyer and there's a lot of stuff you can find on the internet that I personally think is good enough (I would use it in my personal affairs because the risk of the missing edge cases being an actual problem is slim) but I wouldn't be comfortable recommending it as a solution to a client because those missing edge cases are a real malpractice risk.
In the case of a logo, good enough is whatever the client thinks is good enough. In the case of a lot of legal solutions, good enough is often a murky risk/reward calculation based on legal concepts the client may not understand completely.
I still think there's enormous room for improvement, both in helping clients understand the concepts and the risks they're taking, and also in providing better automated solutions.
I think it could be done, especially if you could gather enough information to show people the likelihood of certain problems happening given their circumstances. The biggest problem is that without software to do the heavy lifting, you're spending so much time talking to the client that you might as well be their lawyer. And then even if you save them money, their "real" attorney might argue against your advice or retrace all your steps at an hourly rate.
I'm sure that there are lots of legal consulting companies that do this for people and entities that consume lots of legal services but the real trick is providing it profitably to "unsophisticated" people doing a one time thing.
> I feel like charging on an hourly basis is a common pattern in many industries that opens the doors to competition from startups with different pricing structures...as long as the startup can do everything in a manner compliant with the existing industry.
That last step's a real doozy, though. Startups are a field that thinks "move fast and break stuff" is actually a good idea. That kind of thinking works when you're slinging viral social media and personal productivity services, but it is catastrophic when you try to move into an industry where your customers' lives or livelihoods are on the line.
Yes, it is insanely frustrating. I think I did fairly well in law school in part because I wrote a program to auto-format my cites, which saved me hours of mindless, awful, pedantic, irrelevant Bluebooking.
>The firm charges their clients on an hourly basis, so they don't really have an incentive to be more efficient.
While I agree that the billable hours system reduces the incentive to be more efficient, I don't think it removes it entirely. Otherwise lawyers would still be using typewriters to draft memos. In my experience, removing some of the inefficiencies frees up time and mental bandwidth to focus on activities which actually benefit the client. More time reading cases, researching, evaluating issues. And you can bill for that.
You hit the nail on the head: there is an incentive to automate legal services à la LegalZoom, but for lawyers themselves, the more tedious and paper-based the process is, the more they can make.
Hourly billing is (very slowly) going away. Fixed fee arrangements will be king (people want predictability). So then a lawyer will want to be as efficient as possible. The legal industry is admittedly behind the times, but they do continue to move forward.
I have a family member who is a lawyer and who has a similar type of software setup. The biggest obstacle I notice in the legal industry is that many simply don't care. They actively dismiss software as being unimportant even though they rely on it every day. It's a very bizarre case of the legal industry hating the very industry that could help them.
Note: perhaps my experiences aren't representative of the industry as a whole.
This is exactly correct. As a lawyer and a software developer, I long ago gave up on the idea of selling software to solve the problems of lawyers and/or law firms. Lawyers tend to be terrible customers of technology, if for no other reason than that they have established completely backward incentives that reward inefficiencies and information deficits.
The only "legal tech" that can succeed (in my opinion) is the kind that eliminates the need for lawyers, but then you're up against a different problem: people who think lawyers are magical wizards who can invoke spells to keep lawsuits and regulators at bay. It's really hard to convince many people that they don't need a lawyer, even though lawyers and law firms are almost never accountable for the advice they give.
I agree with eliminating the need for lawyers for most things, but the biggest problem is that in small cities/towns (maybe big ones too, I just have no experience in that domain) judges and lawyers are "buddies". People with the exact same charges can get radically different sentences depending on whether they have a paid lawyer versus no lawyer or a public defender. There's a public defender in my town who also has his own private firm, and it's amazing how differently the judge and DA respond to whether you hired him or the town did. If all of that isn't bad enough, you can see the judge, DA, and lawyers all making backroom deals and exchanging favors. And they do it fairly blatantly in my town. I've rarely seen an objective case, and it's a shame because law is perceived as a "sacred" domain where objectivity rules.
If you're going to court, bring a lawyer. Courts are the domain of arcane procedures and common sense has no place there. My comments above refer to transactions, compliance, etc.
Probably smart to bring one to court with you, but maybe not required for drawing up a standard will where the few assets you own should just go to your next of kin.
Lawyers write human-readable code that's compiled and run by a judge or by interfacing APIs (institutions, such as financial ones).
Your advice is akin to saying, "Hey, inexperienced coder, write some production-ready code, but don't test it; the one and only time it needs to run, give it a try. Hope you don't screw it up! Especially when there's another coder in the room who can claim 'oh no, he meant to set my financial variable to 100x, not 10x' and can convince the compiler to agree with them."
This analogy falls apart pretty quickly. You can't compile legal work product and no one is accountable if it doesn't run, unless you're at the point where malpractice comes into play. Malpractice is really, really hard to prove, though.
But most judges are making subjective decisions and not just "running code". The US constitution is law; can you compile it into code such that a computer could tell you whether a particular piece of legislation was unconstitutional? If you could, why hasn't such a computer replaced most of the US Supreme Court?
I investigated doing a SaaS product in the legal space. One of the things I heard multiple times from lawyers is that they are more likely to buy a product if they can bill their clients directly. What you are talking about would not fall into that category. That doesn't mean you couldn't build a successful business, but I think it is important to understand that the market has different rules.
I've heard the same thing from multiple people attempting to build for the legal space. Lawyers won't pay for software to save time because that has a negative ROI for them. OTOH they may pay for tools that helps them provide more services, bill more time, find more clients, or decrease the chance of mistakes.
> OTOH they may pay for tools that helps them provide more services, bill more time, find more clients, or decrease the chance of mistakes.
But... saving time means they have more time to provide more services, accept new clients, and review their documents to decrease mistakes. Is the relationship not apparent in their minds?
That was done in the UK. I wrote the first working version for Legal Cost Finance, who offered an instant credit facility "to make justice affordable to everyone". It took them 3 years to take off, even though the whole pitch was that they literally brought the bulk of pre-paid (!) customers to legal firms.
Can you explain what you mean? Letter generation etc. is still useful; I don't see what billing has to do with it. They can still charge what they want to.
I think he means pricing schemes are simpler when you go with a flat "$200/hour" rate. Obviously it's shit for the client, since they have no idea how many hours will be spent on the case, but that's not the lawyer's problem.
Sure. The purpose of a SaaS product would be to improve efficiency and save time. Given lawyers typically charge by the hour, they would lose money unless they could bill their clients directly for the use of the more efficient software to make up for the lost revenue due to saved time. However billing clients for legal software is not the norm.
You most likely program in vim or another vi successor. Possibly with a ton of configuration settings and plugins that weren't around in vi. Big difference.
Fair enough. It is vim -- I still refer to it as vi because I'm that old. And yes, I do take advantage of undo and syntax highlighting. I'm sure quite a few of the modalities I use over the years have been added (oh! Tabs, very important). But the point is, relatively old control interfaces can be very effective even after 30 years.
Lots of commenters here have mentioned parts of law practice that involve lawsuits, complex negotiations, and so forth.
Workers Comp, Social Security, Family Law, and elder law in general aren't as glamorous. Clients for those services don't have such deep pockets as the other corners of the law do.
It's likely that a good SaaS-based system with embedded knowledge of jurisdictional rules (in the US, federal and state rules) could be successful.
But the sales cycle for a new product? Getting early adopters? Prepare for some pain.
I always thought there should be a one-year master's of law program - no path to practicing, but one that gives you the frameworks and knowledge to be a good consumer of law.
That's interesting! A1 looks archaic. An interesting startup in this space is https://www.upcounsel.com. They provide document management, calendaring, and such, but operate as a marketplace platform. I'd absolutely agree that the barriers to entry (compliance etc.) seem to be a big opportunity for solutions in the law space. As a side note: it looks like UpCounsel serves primarily as a marketplace because it upends the current law-firm system, which other comments have mentioned as a big barrier to technology adoption in the space. Interesting space nonetheless!
The oil and gas industry is ripe for startups. Here are a few that come to mind:
1. A better system for automation and measurement. Current solutions aren't ideal when it comes to setting up new systems as well as updating and maintaining existing systems. We build several million dollar facilities a month and each one has automation and measurement equipment that has to be individually set up and programmed. Each technician does things a slightly different way, and the end result is a different set of automation and measurement logic at each facility.
2. Fiber optic DATS (distributed acoustic and temperature sensing) data handling and interpretation. This is a fairly new type of technology in which a fiber optic line is installed in the wellbore. The fiber optic line basically acts as a 15,000' strand of thermometers and microphones placed every 3'. The data from one installation is on the order of terabytes per hour. Oil and gas service companies that offer this service don't know how to handle this amount of data. The problem could probably be solved with S3 or something.
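As a rough sense of scale, the figures in the comment (a 15,000' fiber with a sensing point every 3', producing on the order of a terabyte per hour) imply the following back-of-envelope rates; the 1 TB/hour figure is taken here as a lower bound, not a measured spec:

```python
# Back-of-envelope data rates for a DAS/DTS fiber installation.
# Assumed figures from the comment: 15,000 ft of fiber, a sensing
# channel every 3 ft, and (as a lower bound) 1 TB of data per hour.
FIBER_LENGTH_FT = 15_000
CHANNEL_SPACING_FT = 3
BYTES_PER_HOUR = 1e12  # 1 TB/hour, lower bound

channels = FIBER_LENGTH_FT // CHANNEL_SPACING_FT            # 5,000 channels
bytes_per_second = BYTES_PER_HOUR / 3600                    # ~278 MB/s total
bytes_per_channel_per_second = bytes_per_second / channels  # ~56 KB/s each

print(channels)                       # 5000
print(round(bytes_per_second / 1e6))  # 278 (MB/s)
```

Sustaining roughly 278 MB/s of ingest around the clock is well within what object storage can absorb, which is presumably why S3 comes to mind; the harder problem is interpretation, not storage.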
3. Drilling optimization. Create a software suite that utilizes ML/AI to help drilling engineers figure out the best way to drill a well. It's a perfect ML/AI application: lots and lots of training data available, easily defined input and output parameters, etc. Drilling engineering is full of hard, non-linear problems, and humans are just really bad at it. The only way to be good at it is to drill lots and lots of wells and then listen to your gut.
Any more insights about applying drones to 1? I work on deep learning at an aerial imagery startup (tensorflight.com). It seems like we could extract structured information from drone images.
If you are interested in helping us understand the problem and potentially solve it together contact me at kozikow@tensorflight.com
I guess I'll explain how the business side of drilling a well works. An operator (think Chevron) will decide where and when the well is drilled. They will then hire a drilling company (say H&P) that owns and operates rigs to drill the well. Even though the drilling company operates the rig, they basically drill it however the operator asks them to.
So to answer your question, any operator requires these services, though most don't know it. A company called Pason is the leading company in the drilling data industry. Their bread and butter is just data measurement and streaming, though they have recently entered the analytics space. Their technology seems pretty promising.
All of our data is proprietary unfortunately. This leads to another start up idea: a data consortium company for this type of work. I don't think we would mind giving the data up if there was a legitimate way to do so and if there was some benefit for us (I.e. advancing the rate of progress in this field).
Interesting. I'm actually part of an agricultural genomics data consortium with a similar concept (companies contribute $$ and data in exchange for licensing rights to research results).
@athollywood Are you available for offline discussions? Maybe you could put some contact info in the public section of your profile. Feel free to email me: xenon@mailworks.org
Every time I change jobs as an H-1B employee, I have to fill in the same ridiculous data through every law firm's weird interface. I wish the US Digital Service would focus on streamlining forms and auto-importing from all the data they already have about me (e.g. automatically translate I-94 records into how much time I actually spent in the US, infer my past I-797 records automatically, have a one-time education-related upload since that obviously never changes). I realize there are certain valid reasons the agencies don't share data, but I find it hard to believe that, in an era of infinite surveillance, they can't use the surveilled data to at least make my life easier. I can see how the immigration law industry would never allow this, but I can hope.
The green card process is another minefield.
Also, for Schengen countries, I have to apply for a visa every time I travel, and they make me list every time I visited the Schengen zone in the past 5 years, fill out the same application form across different countries, and get the same paystubs and letters from employers. Even a tool that could just machine-read all the documentation a particular country requires for a specific visa and pull everything that can be pulled (bank statements, pay stubs, travel dates from the flight-ticket emails in my inbox, hotel reservations, and so on) would help. Just make it convenient for me to travel :)
Unfortunately any assumptions built into an immigration business are likely to be upended soon. I'd wait at least a year before trying to solve this problem because the regulatory environment could break your resulting startup.
SimpleLegal is not working on immigration. You may be thinking of SimpleCitizen. Teleborder also tried, along with many other non-YC companies. It's not an easy problem.
Well, the perseverant ones end up spending their time on bureaucracy and living in a constant state of foreboding, decreasing the economic value they can add to the country :)
As a structural engineer, I see a good opportunity to make reinforced concrete design software available in a SaaS format. The competition is outdated, clunky, requires local installation and messing about with licenses. Design firms are paying $1000-$3000/year per user/seat for what amounts to a pretty basic app.
Unfortunately, there are very few people that understand both computer science and structural engineering.
I suspect much engineering software is ripe for innovation. My wife is a Water Resources Engineer- a specialized form of Civil that focuses on "when it rains, where the hell is all this water going to go?".
The software for that kind of modeling is apparently pretty basic, pretty expensive, buggy, etc.
A friend of mine was an environmental consultant, and went to startup weekend. They successfully made a SaaS app that would spit out an environmental report in minutes instead of days of manual entry. Just a really good example of small applications that have a huge benefit in old industries http://www.enterratech.com/reports/1/
99% Invisible's America's Last Top Model. The Mississippi River Basin model was shut down because the computer models were cheaper and "good enough." They still use physical models today for other projects, albeit on a much smaller scale.
Fluid dynamics is a very difficult problem to simulate; structures are a lot simpler.
I think the models are quite advanced, and that's where the money goes.
That doesn't mean you can't just take a couple of web developers and make it usable, but as the market is small, it might blow past a viable price point with a supersonic bang...
My brother-in-law would agree with this. He is building an open source project for collecting all of the analysis that civil and structural engineers have to do on every project into a single repository where people can share and augment the calculations (think of it as GitHub for civil and structural engineers).
I would love to see that as well. I have a project with similar goals (posted above). I would like for users to be able to simply call built in functions to do the normal tedious stuff like stress in rods, beams, etc.
I have a friend who works in construction glass. (It's NYC, that's a big market). They ship millions of dollars of complex, custom molded glass every year. Everything's kept track of by emailing excel spreadsheets.
That's disgusting. I feel filthy. I replaced a system at my first client school that also used Excel attachments for tracking student data, exam results, attendance, assignments, etc. I physically felt better once I saw my own implementation replace the email-attachment system. Since then, email-plus-Excel has been a sensitive topic for me.
I've recently found a department for the company I'm working for needs help as they have literally reached the limits of Excel.
They have a sheet with around 30 rows and 150 columns, and they have 100 of these sheets (in a single Excel file). Some parts use formulas, but usually when somebody needs to change something, they need to go through every single sheet. The issue now is that when they try to add new data, Excel won't let them.
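As a hedged sketch of one way out, a workbook like this can be flattened into a single normalized table in a database, after which adding rows is effectively unbounded and "go through every single sheet" becomes one query. Everything here (table name, columns, simulated values) is hypothetical; a real migration would read the actual workbook with a spreadsheet library such as openpyxl.

```python
import sqlite3

# Hypothetical sketch: flatten a 100-sheet workbook into one normalized
# SQLite table keyed by (sheet, row, column).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE records (
        sheet   TEXT,
        row_num INTEGER,
        col_num INTEGER,
        value   TEXT,
        PRIMARY KEY (sheet, row_num, col_num)
    )
""")

# Simulate 100 sheets of 30 rows x 150 columns.
rows = [
    (f"sheet_{s}", r, c, f"v{s}-{r}-{c}")
    for s in range(100) for r in range(30) for c in range(150)
]
conn.executemany("INSERT INTO records VALUES (?, ?, ?, ?)", rows)

# A change that would mean opening every sheet by hand in Excel
# becomes a single statement here.
total = conn.execute("SELECT COUNT(*) FROM records").fetchone()[0]
print(total)  # 450000
```

450,000 cells is trivial for any database, which puts the "literally reached the limits of Excel" problem in perspective: the limit is the tool's shape, not the data's size.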
I don't even want to know how they share the file or do backups.
I work in the healthcare industry. Basically we ARE the industry nowadays, and we use Excel and Word to keep everything "organized". There is some half-assed software, plus websites and databases that are used as well, but it's amazing how a multi-billion dollar company can rely on this level of technology. I think they get these bids to run state government programs and have absolutely no plan in place. And for some reason, instead of just automating or updating things, the company just throws bodies at the problems and makes everything "production" based. I'm sure a lot of places are similar, but this is a white-collar factory on such a massive scale that it literally sickens me. There are so many channels that approval for changes has to go through that by the time some small minor change is implemented, it's already way too late, too distorted by having so many hands touch the problem, and too outdated.
Same with the electronics manufacturing industry, where inventories and BOMs are done in Excel. It is a pain in the ass, time consuming and error-prone. But that's what managers want.
A fellow structural engineer here. I think that there's little room for innovation in the "calc" area. The cost of doing calcs is a small fraction of the overall budget of a structural project. Modeling/drawings is where it's at. The analysis/design toolset of a structural engineer (the FEA/design programs) hasn't changed in essence since the 80's outside of drafting/modeling, and for good reason: the marginal cost of an engineer perfecting their analysis exceeds marginal revenue. There's a sweet spot where an experienced structural engineer knows to stop refining their calcs, with the rest of the effort spent on detailing, which is what sets good structural design apart from mediocre.
Where I would invest (if I were Autodesk or their competitor) is in releasing CAD tools for free in exchange for a consent to use the designs/details internally for ML purposes. Would love to contribute if anyone is working on such a product.
I might be working on something that you would be interested in. The MVP is at www.cadwolf.com. I am a structural engineer with an MS from UT Austin. The website is written in PHP using Laravel, with Angular as the JS framework.
The plan is to link CAD to the mathematics and then link the finite element to this as well. The system would also function as a sort of github for engineering where users can find and use functions to do most standard analysis. Email is in my profile if anyone is interested in talking.
I checked out cadwolf and as an engineer myself I find it very interesting. I am curious to understand a few points:
- Why do you center everything around documents? Is it more because people are used to it, or do you believe that they are best fit for design-tasks?
- I saw that to update multiple documents after a requirement changes, you need to open them one by one, in the order of their dependencies. Have you tested that this is still a viable approach, once you have thousands of dependencies and multiple users in a complex design?
I really like the equations and how you only allow to make formally correct equations (including units). Anxious to see how this develops.
(full disclosure: I am co-founder of a Software which tries to achieve the same aims using different concepts: www.valispace.com)
What I call "Document" are not files in the sense of a word file. They look like text files because that is what I thought engineers would be comfortable dealing with. However, they function as programs. Documents can be used as programs within other documents as well. Documents fill both the need to solve the calculations and to document them in one place. It eliminates the need to update documentation, have multiple platforms, etc.
There are places for users to upload and store data as well - datasets.
As of now, the code solves equations in JavaScript within the browser. This is why documents have to be opened when a requirement changes - because I have no server-side code to solve them without the browser. It isn't a long-term solution, merely a step in the building of the platform. My next step is to add server-side code that is capable of solving more complex and larger equations on the server. When that is done, changes to requirements will update documents without the need to open them manually. I plan on using Python, and there are several large libraries available.
This will allow me to link documents to CAD. When the math changes, the CAD will change as well. Once that is done, I will add a finite element meshing and solution system to create an engineering platform that essentially does everything.
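The unit checking praised upthread (only formally correct equations, including units, are accepted) can be sketched with a small dimensional-analysis class: dimensions are exponent maps over base units, addition requires matching dimensions, and multiplication adds exponents. This is my own illustration of the idea, not CADWOLF's actual implementation.

```python
# Minimal sketch of dimensional checking, the kind of rule that lets a
# system reject formally incorrect equations before solving them.
class Quantity:
    def __init__(self, value, dims):
        self.value = value
        self.dims = dict(dims)  # e.g. {"m": 1, "s": -2}

    def __add__(self, other):
        if self.dims != other.dims:
            raise ValueError("cannot add quantities with different units")
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other):
        dims = dict(self.dims)
        for unit, exp in other.dims.items():
            dims[unit] = dims.get(unit, 0) + exp
            if dims[unit] == 0:
                del dims[unit]
        return Quantity(self.value * other.value, dims)

force = Quantity(10.0, {"kg": 1, "m": 1, "s": -2})   # newtons
area = Quantity(2.0, {"m": 2})
stress = Quantity(5.0, {"kg": 1, "m": -1, "s": -2})  # pascals

# force + area is dimensionally nonsense and gets rejected;
# stress * area yields a force, as expected.
try:
    force + area
except ValueError:
    print("rejected")  # prints "rejected"
```

A real system would layer unit prefixes and conversion factors on top, but the core check is this small.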
I like your site. It's nice to see other people addressing these problems. I am also an aerospace guy. I worked on the shuttle for a while and then designed some components for the Orion. Shoot me an email if you want to talk more.
Tekla Structures has dedicated reinforcement concrete design features. The licence cost is probably closer to 10k per seat.
You need computational geometry, computer graphics, and structural engineering expert level domain knowledge to implement anything. You need to create traditional 2D machine/construction design drawings from the 3D models. Then you need to sell it to corporations, whose work, most of all, must be dependable and free of guess work.
You need to know what sort of geometries you can use to model the reinforcements. Then you need to know how to design the system so it can handle very large amounts of geometry.
The worst of all is you need to deal with god awful industry standard formats- DWG, DGN, IFC, Step/Iges and so on. Maybe DWG import and export first.
To have any real chance you need a guy or two who are good with numerical code, someone who is familiar with e.g. game engines, someone who knows computer graphics, a structural engineer to tell you how he does his job and what the thousand inconsistencies in the field are (this is not a trivial domain like housing or transport), and a sales/marketing guy to connect and push the product.
And, like someone else estimated, the potential market is not gigantic - which is kinda funny because we all depend on reinforced concrete but don't need so many engineers for the design work...
> I was more leaning towards member design software, such as spColumn and S-Concrete.
The utility of your software tools will be very limited if you are restricting yourself to only member design instead of total-structure solutions like ETABS. Why should an engineer pay you at all if they can use a spreadsheet for free to do what you do with your SaaS?
> No one I know is using the automated concrete design built into analysis programs like ETABS, Tekla, etc.
Not too sure about this because I know quite a lot of people who are using these tools. Any reason why the people you know don't use ETABS or Tekla?
> Why should engineer pay you at all if they can use spreadsheet for free to do what you do with your SaaS?
Why do businesses invest in new tech? Why pay for Excel when I can use a pen and calculator? The answer is that it makes them more efficient. We have Excel sheets to do the same thing, MATLAB code to do the same thing, and yet here we are paying for these member design tools because they are the most efficient for us. If you save an engineer even a couple of minutes for each element they are designing, the software essentially pays for itself.
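To make the "couple of minutes per element" point concrete, here is a back-of-envelope sketch; every number in it (minutes saved, elements per year, billable rate, license cost) is an assumed placeholder, not a figure from the thread:

```python
# Back-of-envelope ROI for member design software, with made-up but
# plausible numbers for a single engineer.
MINUTES_SAVED_PER_ELEMENT = 2
ELEMENTS_PER_YEAR = 1500        # assumed workload
BILLABLE_RATE_PER_HOUR = 150    # assumed USD
LICENSE_COST_PER_YEAR = 1500    # assumed USD

hours_saved = MINUTES_SAVED_PER_ELEMENT * ELEMENTS_PER_YEAR / 60
value_of_time = hours_saved * BILLABLE_RATE_PER_HOUR
roi_multiple = value_of_time / LICENSE_COST_PER_YEAR

print(hours_saved, value_of_time, roi_multiple)  # 50.0 7500.0 5.0
```

Under these assumptions the license returns roughly five times its cost in freed billable time, which is why even modest per-element savings justify the tooling.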
>Any reason why the people you know don't use ETABS or Tekla?
We do use ETABS extensively for analysis. We don't use it for design. It is foolhardy to trust the automated RC design in this software. That seemed to be the standard of practice around here, but perhaps it is different in other parts of the world.
> It is foolhardy to trust the automated RC design in these software
Do you mind if I ask why? I'm working on a sort of general approach toward designing trustworthy engineering software, and I'm trying to collect as many reasons as possible for "can't trust the software".
Since you can already do the analysis (like ETABS), and you are planning to do individual member design, why not put the two together and build automated RC analysis+design software? There is no reason to distrust an automated package any more than separate analysis and design software.
There absolutely is a reason: seismic design. We end up doing a lot of data manipulation between the FEA stage and member design stage.
It's not distrust so much as a fundamental flaw. For simple gravity design it works fine, but even then we use spColumn because it's just quicker for us.
Care to explain why you have to manually do lots of data manipulation between FEA and member design? Why not write software that automates that data manipulation? It seems to me that an all-in-one package should have no problem doing the analysis, the data manipulation, and the member design.
In the US there are about 281,400 civil engineers [1]. I couldn't find more detailed information on structural engineers.
- Assume about 10% are practicing structural engineers who need to design concrete structures = 28,140.
- Assume a company wants 1 license for every 2 engineers = 14,070. (I base this off the fact that my company has 6 licenses for 12 engineers, but we may be higher than average.)
- Assume we could get 10% market share = 1,407 subscribers.
- Assume $1000/subscriber/year = $1,407,000 from the US market.
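The estimate above, written out step by step; every factor is one of the stated assumptions, not independent data:

```python
# Market-size estimate for US structural engineering design software,
# using the assumed factors from the list above.
civil_engineers = 281_400    # BLS headcount cited in the comment
structural_share = 0.10      # assumed fraction doing concrete design
license_ratio = 0.5          # assumed: one license per two engineers
market_share = 0.10          # assumed achievable share
price_per_year = 1000        # assumed USD per subscriber per year

structural = civil_engineers * structural_share  # 28,140
licenses = structural * license_ratio            # 14,070
subscribers = licenses * market_share            # 1,407
annual_revenue = subscribers * price_per_year    # $1,407,000

print(int(subscribers), int(annual_revenue))  # 1407 1407000
```

Multiplying four uncertain factors compounds their error, so the real figure could easily be several times higher or lower, which is exactly the objection raised in the replies.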
I think this is an overoptimistic view of the size of the market. The field is highly fragmented. A large fraction of structural engineers are contractors or work for smaller firms (think wood design, rather than concrete/steel) and wouldn't be target customers for section analysis and concrete detailing software. An absolute majority of those that work for the larger firms don't require anything more than Excel spreadsheets. From my experience, a typical structural engineer is a rather savvy computer user, often times writing Excel macros or AutoLISP scripts to automate their tasks.
A large not-going-to-name-it software package for modeling steel and concrete structures alone generates over $100M a year. It hardly dominates the market, so depending on how the market is segmented, a good estimate is probably any number between 1 and 10 times that.
Single seat licences are not the only revenue model. Once a product gains traction consulting, training and providing VIP helpdesk and bugfixing services factor in as well.
I am a fellow engineer (satellites in my case), and we have been fed up with engineering tools in general (especially systems engineering), which seem to consist only of Excel spreadsheets and document-management systems. Even in the space industry there has been practically no innovation since the '60s beyond digitization of documents.
We have been working for 1.5 years with some engineers on software to solve this: www.valispace.com
I would be curious to hear from you whether what we are building with a focus on the space-industry also applies to structural engineering.
Out of curiosity, what do you think of what flux.io is doing? If a SaaS structural engineering application could be integrated with that, would it address your needs?
(For what it's worth, I'm doing something similar in the transport planning space. And yes, bridging the gap between that and modern CS is a substantial piece of work.)
I am working on doing this. The project is called cadwolf. The MVP is up now, and I will be coming out with a full version in January. If anyone has comments, I'd love to hear them. If you are interested in collaborating, let me know too.
A ticketing system that doesn't suck (I like RequestTracker, but it shows its age). The top players are ridiculously overpriced.
My management style is like this: every task/request is numbered, placed in a queue and assigned to a professional.
What I expect from my ticketing system:
- every manager should be able to assign tasks to someone and set the order they must be executed in. He needs to know what his team is doing and when they finish each task.
- every professional should know what to do and what are the priorities.
- everything is numbered and linked, all communication recorded.
Everything should be well integrated with email (please, don't send me a notification email with an answer and a URL; send me the f* answer). If I answer the email, everything goes into the system. I should be able to send commands to the system by email (for example, add a keyword to make it a comment instead of an answer).
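A minimal sketch of what that email-driven workflow could look like: the ticket number is parsed from the subject, and a leading keyword in the body turns the reply into an internal comment instead of a customer-facing answer. The subject format, the `#comment` keyword, and the addresses are all made up for illustration.

```python
import email
import re

# Hypothetical email-to-ticket gateway: subject carries "[#<id>]",
# a body starting with "#comment" becomes an internal comment.
def parse_reply(raw_message: str):
    msg = email.message_from_string(raw_message)
    match = re.search(r"\[#(\d+)\]", msg["Subject"] or "")
    ticket_id = int(match.group(1)) if match else None
    body = msg.get_payload().strip()
    if body.startswith("#comment"):
        return ticket_id, "comment", body[len("#comment"):].strip()
    return ticket_id, "answer", body

raw = """Subject: Re: [#4821] Printer offline
From: manager@example.com

#comment Waiting on parts, reassigning to Dana."""

print(parse_reply(raw))
# (4821, 'comment', 'Waiting on parts, reassigning to Dana.')
```

Real systems also verify the sender and embed a signed token in the address or subject so ticket IDs can't be spoofed, but the core routing is this simple.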
The problem here seems to be that users/customers insist on customizing any such app to death.
Personally, I think the optimal ticket system would have this data for each ticket:
* A unique, prefixed ticket # (JIRA gets this right)
* An assignee (like an email To:)
* A reporter (like an email From:)
* A one-line summary (like an email Subject:)
* A multi-line body (like an email body, but ideally with markdown)
* Attachments (like email attachments)
* History for edits of all of these (not like email!)
That's it! It really is basically email, but with a unique ID, and editable with history instead of immutable with replies, and a decent UI, perhaps RSS + notifications.
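The fields listed above fit in a very small data model. A hypothetical sketch (the names and the edit-history mechanism are my own choices, not any existing system's schema): edits append to a history list instead of being lost, mirroring the "editable with history, not immutable with replies" requirement.

```python
from dataclasses import dataclass, field

# Minimal ticket model: the six fields from the list above,
# plus a history log so every edit is recoverable.
@dataclass
class Ticket:
    ticket_id: str          # unique, prefixed, e.g. "PROJ-42"
    assignee: str           # like an email To:
    reporter: str           # like an email From:
    summary: str            # one-line subject
    body: str               # multi-line markdown body
    attachments: list = field(default_factory=list)
    history: list = field(default_factory=list)

    def edit(self, field_name: str, new_value):
        old = getattr(self, field_name)
        self.history.append((field_name, old, new_value))
        setattr(self, field_name, new_value)

t = Ticket("PROJ-42", "alice", "bob", "Fix login", "Login fails on mobile")
t.edit("assignee", "carol")
print(t.assignee, t.history)
# carol [('assignee', 'alice', 'carol')]
```

Notably absent, by design: statuses, workflows, and approvals, which is exactly the bloat the next paragraph complains about.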
Unfortunately, everybody else seems to think that their ticketing system should embody their vaguely defined and ever-changing workflow, prioritization, approval, and release management system, so they want to be able to add any number of possible statuses, approvals, workflows, and all the rest. Once you add that, you end up with another JIRA or ClearQuest or BugZilla, and the cycle repeats itself.
This sounds very like Redmine [0]. It's ostensibly for project "issues"; however, it's extremely customizable, and all of the above are included in the default config without much more. It sounds like if you removed about 2 default fields, it'd be perfect for the ticketing system you describe above. Plus RSS + notifications + a solid API.
I'm not associated with them, but I have used them successfully for months at a time (better than most productivity software). The reason is it is well integrated and similar to email.
Asana is pretty bad. It takes forever to load, which sucks when people send you URLs to tasks/projects and you have to open them individually and wait almost 10 seconds for each to open.
Asana's start-up time is just ridiculously slow. Probably the slowest webapp I've ever used. Also, you can't assign more than one person to a ticket, which is a pretty big limitation.
JIRA also doesn't let you assign more than one person to a ticket. Could you expand on why this is a problem? I'm not familiar with this problem space, so I'm just curious.
The recent GitHub updates let you assign multiple people to reviews and such, but I find it's usually better to tag everyone you want to look at something. I don't think assigning something will send a notification.
In a nutshell, I argue that the problem with most ticket systems is that they do not constrain the domain enough, so they wind up having similar problems to email (sifting through a chronologically-ordered pile of text rather than structured, semantically-ordered information).
Your comments make me think the crux of the problem is that people want tickets to be like email and use email to manage them. I'm not sure you can ever overcome the "chronological pile-up" problem if you allow email as a user interface to ticketing.
The simplest solution to the 'chronological pile-up problem' (nice name BTW) is a Wiki model, where replies are appended by default but the entire content can be edited if necessary. (C2 demonstrates this quite well.) For simple problems, conversations behave exactly the way they used to, but when it starts getting complex someone can go in and rearrange the conversation into a more logical form. This actually maps quite well to email: by default replies are appended to the bottom, but they can also be inserted inline (some mailing list etiquettes even demand this) or indeed the entire conversation can be rewritten. You'd probably want some sort of merge algorithm in case someone replies to an older email.
In fact, my usual approach to dealing with tickets/issues/emails which start to develop this problem is to make my own private copy of the thread and edit it in precisely this manner, though I'm the only one this benefits since it doesn't get sent back upstream.
I also have an idiosyncratic way of organizing this stuff, which is basically to use Emacs + mu4e to search my mail and, if I need to create order, write a new document from scratch. I have a coworker who does what you do, now that you mention it - he will take a series of emails, dump them into Word, and edit them until they become a useful document of some sort.
I still think there is something here though. Stack Overflow replaced message boards, which were basically HTML versions of mailing lists, and part of that was identifying the semantics of question, answer and comment and defining new operators and new expectations for them.
A wiki is a good approach but because it's totally free-form, the user gets stuck doing the work of keeping things hygienic.
JIRA allows you to edit all the properties of a ticket whenever, but it generates such a huge cloud of email notifications in the process, it kind of disincentivises you from using it. And nobody is in the habit of rereading the page to see what is different since last time.
> Your comments make me think the crux of the problem is that people want tickets to be like email and use email to manage them. I'm not sure you can ever overcome the "chronological pile-up" problem if you allow email as a user interface to ticketing.
I agree that's partly it, but that seems ok when you're in the thick of discussing a problem/fix. If you're doing a code review or something after a fix has been pushed, you actually want certain messages to stand out to describe resolutions and whatnot.
So, like Gmail, where you can star/mark certain replies as important: those messages would show up at top level in the ticket, while all other messages are collapsed.
We're in our first month of releasing Zammad [1], an open source Zendesk alternative with pretty neat features. You can check out some screenshots or get a free 30-day trial of our hosted solution on our commercial site [2]. I really like your feature ideas and will create issues for them later. It would be great if you added some too, if you have more of them.
Full disclosure: I'm part of the maintainer staff.
Additional features: custom fields with user-chosen types (free text field, drop-down list, etc.); time tracking (I spent n hours/minutes on this ticket); these should be searchable.
Major feature that lets me work around any shortcomings in your offering: API access to everything and/or database access (preferably direct read/write access, but even if it's just a downloadable .sql.gz it's a huge benefit).
I've been building support and dev/ops ticketing systems for years and I still haven't found a platform that suits all needs.
For my latest startup I went looking for a service desk tool. The key criterion was "feels like email". The moment any alternative required a user signup just to lodge a support request, I ruled it out.
I ended up choosing Groove. I don't recommend it. All ticketing systems suck; this one just sucked the least for my support desk. Groove doesn't extend to other ticket types, it's nowhere near as flexible or extensible as JIRA, and the mobile experience is horrible. But it does "feel like email" for my customers better than every alternative you care to mention.
> every manager should be able to assign tasks to someone and set the order they must be executed. He needs know what his team is doing and when they finish each task.
That sounds like unnecessary micromanaging. You couldn't possibly have enough detailed knowledge to know the proper order of tasks in all cases. Possibly even most cases.
I agree that communicating priorities is important, but the boots on the ground have a much better understanding of what they're working with than you do.
we use Github issues + Zenhub for that (though Github has recently implemented a lot of Zenhub's features). Managers mainly use the 'boards' and 'milestones' views; devs use 'boards' and whatever else. Messages are included in emails and you can reply by email.
Semi-related... I work in wellness and healthcare.
I don't know about you, but I despise filling out the same forms over and over again when seeing new healthcare providers. I'd love to start a service modeled after granular smartphone permissions where
(a) I check in at a new office (scan a code, they scan my code, beacon, something like that)
(b) the office then requests x, y, and z information
(c) a push is sent to my phone where I can review the information and approve or disapprove some or all permissions
(d) a final step of either entering my pin at the office, using my thumbprint on my device, or something else.
The key components would be storing the data encrypted at rest, following HIPAA and then some, having a solid auth protocol (keys, jwts, etc).
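A minimal sketch of the kind of scoped, short-lived access grant step (c) could mint. This uses only the Python standard library as a stand-in for a real JWT library; the claim names and flow are my own assumptions, not an established protocol:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-key"  # hypothetical; a real deployment would pull keys from a KMS/HSM

def issue_grant(patient_id: str, fields: list, ttl_s: int = 300) -> str:
    """Sign a short-lived grant covering only the fields the patient approved."""
    claims = {"sub": patient_id, "fields": fields, "exp": time.time() + ttl_s}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_grant(token: str) -> dict:
    """Check the signature and expiry before releasing any records."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if claims["exp"] < time.time():
        raise ValueError("grant expired")
    return claims
```

A production version would use asymmetric signatures (so the office can verify without holding the patient's signing key) and rotate keys, but the shape is the same: the token names exactly which fields were approved and expires quickly.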
I think adoption would be helped because the public are already used to permissions like these when installing apps.
The benefits: there's no paper trail (no one can fail to shred my SSN), my most up-to-date data is always available, and instead of hosting N apps/databases, I'm storing one, which reduces maintenance and customer support issues. One for all, all for one.
Too much inertia on the provider side for this to catch on and reach critical mass - many septuagenarian sole practitioners out there using paper diaries / files, and larger organisations with some monstrosity written in COBOL (or MUMPS?) that will never change to accommodate this.
I'd suggest something much more low-tech - a website where you can punch in all your details - insurance, allergies, medical history, etc, etc... and then you can print it out (or a subset of it, for different kinds of providers) or generate a PDF that they can copy & paste into their horrible legacy system (an improvement on retyping), or, for those truly at the cutting edge - the kind of electronic transmission you speak of.
I'm probably biased because I've now lived in two areas where healthcare is one of the major industries, if not the major industry. They're always trying out new apps and services here.
I am on board with what you're saying: an escape hatch for non- or semi-adopters. Obviously, printing is one way to go, so maybe the mobile app could offer the ability to check each piece of information required, then export/email it to your preferred destination.
It'd also be interesting to look at making money on conversion, i.e. replacing, or integrating with, the outdated monsters you're talking about.
Maybe we're not even talking about healthcare anymore, maybe just the ability to piece together PII (personally identifiable information) and deliver it to X.
>>>> on another note
This goes into a topic I've seen posts on recently, and something of interest to me, personal indexing; a better way to throw blobs against the wall and have it indexed for me, leading to a personal Google. I mean, that's already coming, really, between Facebook and Google (especially Google Photos) but currently I see nothing about piecing together information I'd like to share on a professional level.
Hmm, Google Drive does a reasonable job of that. It indexes everything (including OCR for images + PDFs), has decent search, and has per-folder permissioning and sharing.
It's actually a pretty good solution for ad-hoc "working together" with someone (a lawyer / architect / whatever) on a project, where you have lots of files you need to share and refer to during the project.
Maybe the problem is we're trying to get the wrong people to pay. Since the pain point is with patients, fix it for them and make them pay. Gets around the industry inertia.
Sell the service to the patients for some smallish fee ($5 per month) and then provide the integrations into the various provider systems for free.
Later on you could scale it up to be an add-on to employee benefits or the health plans.
Insurance card scanning and recognition is available. Costs $999.[1] This has apparently been around for years; there's Windows 98 support. It's been acquired by AcuFill.[2]
They also offer identity document verification with facial recognition crosscheck. They want to use this to detect visa overstayers for immediate deportation.[3] That now looks like a market with potential.
The military is terrible about this. You are constantly filling out forms that amount to a half page of your basic information, followed by a couple of text fields that the form is actually for.
I'm sure it's possible to hack together an AHK script[1], combined with Pulover's macro creator[2] to automate virtually anything repetitive on a Windows PC, or use Selenium to automate browser actions[3]. Of course then you run the risk of having to fall into the classic XKCD automation time sink[4].
France has national healthcare and everyone has a smart card with vital information, and all doctors have the hardware to read it and software to process it. Or at least they did 15 years ago when I was there.
We just built something like this for the health market. Users can auto-sign-in to websites with one of their identities or set it to ask on each visit. Here is a little demo I made that uses Craigslist as an example: http://tricorder.org/cl
When the user goes to your website, say wellness.com/newcustomer, there are JavaScript APIs to get at your standard data; that brings up a permission dialog, and if the user accepts, the data is sent to the website. Send me an email (profile) if you want to talk biz, though I was planning to open source it. Auth is very solid, but it's currently Android-only.
I am working on a patient-driven platform which brings together all key stakeholders with support of a few good partners, and I am currently conducting user interviews for it. I would love to talk with you more about your ideas. To make scheduling painless, I have a link in my LinkedIn profile summary:
I would love to hear from anyone else with big ideas relating to or are working on driving outcomes towards holistic wellness with patient-center healthcare, patient data collection/quantified self, and patient-powered research networks. In the bigger picture, I am passionate about making the world a better place through innovation and working on what really matters for humanity.
Here's an observation. HIPAA applies to health care providers, insurance companies, and other entities like that. HIPAA does NOT apply to me when I am in possession of my own personal health records. Not saying such an app should not be secure, but for me to hold my own records is regulatorily simpler than HIPAA.
I'm surprised none of the comments mentioned ZocDoc, which does most of this already. You can fill out forms once, schedule appointments and click to send your info to the office.
The more in-depth the examination and the more time you spend with the patient, the more they can charge. All those forms are "taking family history", etc. and it is free money since you have to do the work. Those are then scanned so they can be used later in an audit.
(Source: I also worked at a start-up that was trying to disrupt outpatient medical systems. It's very hard and has lots of roadblocks. BTW, of the top 50 EMRs in the US, only 3 have APIs, and those are mostly to pull data, not push it back in.)
All the ballroom dance competitions use this old, disliked software to organize and run the events. The guy who wrote it isn't interested in making improvements (and it could certainly use them) and is happy living off the income from per-event usage rights. I am sure that if something modern and regularly updated came out, it would get a lot of uptake. Thing is, the portion that runs during the event needs to be able to run offline, since venues don't always have reliable internet, and that also means you would be going to at least the first few events for support. And your tests had better be good, since time is of the essence if something does go wrong mid-event. I thought about it and decided I was not interested in dealing with all that when my job pays pretty well. Still, it's a real opportunity.
"So what happens when the Douglasses are no longer around? We have every reason to believe that we'll be around for a good long time, but we wanted a plan to provide for our loyal customers just in case we aren't so lucky. So we made one.
Here is how the plan works. Immediately upon learning of our deaths the executors of Dick's estate (his two highly computer literate kids) will post two files on our www.compmngr.com web site and will send out a broadcast email advising our customers how to download the files. The first file is a small standalone computer program called RegisterEvent.exe, which allows you to create your own registration files. So you won't have to register with Douglass Associates and you won't have to pay a registration fee. You can read more about RegisterEvent and how to use it below. The second file is a ZIP file containing all the source code for COMPMNGR and its supporting programs. This file will only be of interest to those few users who want to continue COMPMNGR development and who either know C++ programming or are willing to hire a C++ programmer."
> I am sure if something modern and regularly updated came out, it would get a lot of uptake.
Would it? See, here's a dirty little secret: people can't deal with change.
Any change made to the software means people have to learn something new. And that results in tech support.
I once had a very nice chat with the CEO of a CNC company and asked him why certain features weren't implemented since his hardware was clearly capable of it. He was quite blunt that a single new feature added about 30% to his tech support budget for almost 3 years, and his tech support budget was almost 1/3 of his annual budget.
So, he simply will not add a feature until it results in an expected $500K in increased revenue, or he has to fend off a competitor.
> Still, it's a real opportunity.
Is it? Actually?
And do you know ballroom competitions well enough to get all the corner cases correct? The Douglasses have been to a LOT of competitions and probably wrote this because they got tired of the grief caused by badly run competitions.
How many ballroom competitions exist (<1000)? How much are they willing to pay (<$1000)? And how much will tech support cost?
So, this is less than $1,000,000 per year in revenue, max. And this software is already in place, with people who know how to use it.
Your revenue will likely be $10-20K per year for a long while unless you completely displace this. And they can always drop their prices and block you out if they feel like it. And your tech support costs will be quite high.
I suspect the Douglasses made this same calculation and that's why they aren't improving it. It's just not worth the money.
This is an idea I also considered, given that in Europe the software is similarly awful, but at least the guy (yes, the one guy) here is still doing some improvements.
If one would like to do something in this space I'd go with a solution where you can rent the equipment, get it shipped to you in boxes and ship it back later. For larger organisers you could arrange for leasing options or an on-premise installation that has an auto-update.
The advantage would be to provide offline capabilities including a controlled network environment for adjudicators.
It looks like you have to pay "to have more than 250 entries [I don't know what's typical], to sign up for web page creation options, and to receive technical support".
And here they ask for credit card details over http...
Typical is often larger than 250, and not having things like heat lists up online before the event would make people think you are not running things seriously, whether your event is over 250 or not. I hadn't looked at this web site, and haven't personally used the product. If this website is at all indicative of the user friendliness and modernity of the product itself, I can see why the event organizers complain.
Exactly. The whole point is to get access to people with insights about an industry and who can point to the problems and why they haven't been solved and connect them with people who might have solutions to those problems.
Just imagine how much valuable knowledge and insights get lost every time someone retires.
Well, I'm sure you would still get different people commenting due to one set being home on a Friday night one month and then a different set the next. I would prefer monthly, personally.
I got so excited about the thread, and it made me realise something very interesting, which I turned into an essay. I call it looking for hidden problems underneath obvious solutions.
Rapid generation of high-quality 3D models of existing objects. The process should be independent of object size (e.g. a coke can should use the same process as a car), and process time should scale with object size.
Think somewhere on the order of 10,000 models per day throughput.
There's $BNs waiting for you. It's ridiculously hard.
This might already exist depending on your exact requirements and it's a fairly common technology in the world of metrology. I regularly work with manufacturers to reverse engineer and/or measure molds, jigs, and fixtures for which there are no drawings available. A ROMER or FARO arm with a laser scanning head outputting its point cloud to a software like the PolyWorks suite can generate an incredibly accurate CAD model of incredibly large parts in a very short amount of time (an hour or two at worst if the mesh needs a lot of cleanup).
I assume that process would be easy to speed up if the requirement for absolute accuracy were removed. The 8' ROMER arm we use is accurate to ~2 microns over its entire volume, which is absolute overkill for something intended to produce models for visual arts applications. A quick and dirty approach to generating the mesh might increase the inaccuracy by several orders of magnitude, but when a coke can has dimensional tolerances on the order of tenths of a millimeter, the quick and dirty mesh will still be representative of the end product.
Unfortunately not. FARO and other structured light systems don't export texture and are generally too precise (micron) in current form. So they take too much post processing by default.
Who would be the primary customers? The entire 3D capture market is currently several $B per year, including services. Where are the customers who aren't being served today and who would double this market?
Well it would siphon everything away from the existing 3D capture industry and open it up to smaller groups and those that aren't savvy on it yet. The consumer space generally isn't doing this so anyone that sells anything would get on board at a low enough price point and simplicity.
Not really. I know some folks on the RS team and they don't really develop around applications, they are more focused on miniaturizing and making RS more available and lower power.
That said, some people have tried to use RS for this problem, but from what I've seen end up just using Kinects.
My colleagues at Creaform have something that works pretty well. Their latest handheld scanner can generate a wire mesh on the fly with 0.030mm resolution.
I'm sure I speak for many of my fellow fans of physics when I say that technology capable of scanning a complex smaller object in 8.6 seconds would have applications beyond just making 3D models.
Product development engineer here. In the early stages of a project it can be useful to have CAD models of a competitor's product when analyzing how to improve upon them. Recently we had an intern reverse engineer a competitor's product, and we've used some of these CAD models as the basis for our new designs.
There are multiple, but mine specifically is AR. It's also valuable for VR, 3D space planning for designers/architects/engineers, assets for game dev, objects for modeling and simulation, training deep vision nets, and on and on...
Anyone who does work with 3D content: visual effects, video games, VR, AR, etc. Being able to quickly build your scenes from a huge library of accurate models would be amazing and would save businesses lots of money.
What kind of structural integrity do you have in mind? Something with the density of industrial packing foam? I've seen set pieces constructed / carved from such material and it can be painted quite well. Putting aside the environmental / toxicity concerns for a moment regarding the type of material to be used, I'm genuinely curious how "rigid" such pieces might need to be.
Oh, so you mean like a big box that could fit XYZ items inside it and capture something like 10,000 per day? I mean, to me it's kind of hard to believe nobody's tried making a "conveyor belt" like process inside a closed system (a shipping container?) with the right optics and resolution to pull it off. Fidelity plus speed plus software consistency. Considering what I saw the gaming industry doing with static models about 10 years ago I kind of thought it'd be a lot further along now, but I guess not. Sounds like a good project for a few Rensselaer Polytechnic Institute grads that otherwise would've been destined for Kodak.
> Oh, so you mean like a big box that could fit XYZ items inside it and capture something like 10,000 per day?
Maybe but I actually think that's the wrong approach.
> I mean, to me it's kind of hard to believe nobody's tried making a "conveyor belt" like process inside a closed system
Yea they have - kinda. None of it works well or fast enough though. We put up a patent for one a year ago before I thought there was a better way to do it. The manpower required to move items onto/off of a line is a big part of the problem.
Well, yeah, the human element is exactly what I'd want to eliminate as much as possible. I'm thinking of it more along the lines of what I've seen on How It's Made: Dream Cars, in the sense that the various layers you're going to want (basic dimensions, surface features, coloration, reflective properties) aren't going to happen in one quick grab, I don't think, and I get the feeling the process would work best in "absolute darkness" and isolated as much as possible from vibration.
Taking that 10k number, and assuming disparate types of items that might be part of a series like "Bathroom" (toothbrush, hair brush, toilet brush, plunger), in 24 hours that means cycling each item through in about 8.6 seconds. The only way I remotely see that being possible is essentially having a robot hand pick up the item at the entry point, hold it for the capture sequence (perhaps with a custom-designed 'mount' that allows for true 360 via a couple of positions), and then drop it out the other side.
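The arithmetic above is easy to parameterize; a tiny helper makes the per-item budget explicit (the station count is my own addition, to show how parallel cells relax it):

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def cycle_time_s(items_per_day: int, stations: int = 1) -> float:
    """Seconds each station has per item to hit a daily throughput target."""
    return SECONDS_PER_DAY * stations / items_per_day

# One cell handling 10,000 items/day leaves about 8.64 s per item;
# four parallel cells relax that to about 34.6 s per item.
```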
It's the scale part I'm wondering about, re: one size machine fits all doesn't seem to make sense. One machine for items under a certain dimension (e.g. "hand held") then another for items where the machine has to essentially have super-powers to pick up and rotate objects to complete the imaging process (e.g. a couch, a dresser, a motorcycle, etc). I think trying too hard to accommodate outliers ends up tainting the balance of operations a little? Just thinking out loud, really cool puzzle.
Yes, there have been a million attempts since the early days of 3D. Most of them are photogrammetry or structured-light setups of some kind that aren't fast enough and don't scale across sizes. Part of it is the logistics of getting objects through a scanner, with accuracy that is poor or muddy at best.
IMO it should be done with a mixture of image segmentation and procedural generation.
There isn't a combination of laser scanning and/or structured light projection scanning that can accomplish this? Or is it a speed/quality control issue of the output?
Triangulation laser scanning is about the closest you get in terms of accuracy. It can work on virtually any surface, including specularly reflective ones (I've worked on bespoke systems for the steel industry). It's accurate down to microns, but the usual problem is the field of view sucks - either you go further away and sacrifice resolution or you go really close and accept that you need to move the scanner (or object) around a lot. For small things, it's fine. You put your doodad on a turntable. For cars, forget it.
Stereo structured light is great, but doesn't work on specularly reflective objects. You've seen those amazing depth maps from the guys at Middlebury? Wonder how they get perfect ground truth on motorbike cowls that are essentially mirrors? Well they have to spray paint them grey so that you can see the light. The next problem is that you're limited by the resolution of the projector (so I guess if you own a cinema, yay!) and the cameras. Then you have to do all the inter-image code matching which sounds trivial in the papers, but in practice a lot harder (and since you don't get codes at all pixels you need to interpolate, etc, etc).
There are handheld scanners like the Creaform which work pretty well on small things, but I don't know what the accuracy is like.
The ultimate system would probably be a high-resolution, high-accuracy, scanned LIDAR system. Then you lose the problems with scanning ranges/depth of field, but you accept massively higher cost and possibly a much longer scan time for accurate systems.
Beyond just the 3D: Do you think that capturing the full appearance properties (BRDF etc.) of the object would be useful? This would allow users to very accurately render objects.
See other responses for the markets it would serve. Use 3D modeling by hand pricing as a comp (Low end $10/model, average in the $50-100/model range, sky's the limit for super HQ stuff).
Not sure what kind of datasets you're looking for. You'll see actual products to test with.
That makes sense. I think there's opportunity for generative ML to eventually help here. An open dataset of (images, description) -> 3d model would go a long way. Check out this paper on using GANs to generate voxel-based models: http://3dgan.csail.mit.edu/
I've been studying and working with GANs for about a year now. They are still very exciting, and I'd love to try to expand my codebase to new types of data.
Additionally, there are some recent techniques that haven't been tried with voxel-based renderings.
Perhaps there is another algorithm that can help go from voxel -> polygons as well.
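On that voxel → polygons step: the standard technique is marching cubes, but even a naive "blocky" extraction shows the idea: emit a face wherever an occupied cell borders an empty one. A pure-Python sketch, illustrative only:

```python
def voxel_surface_faces(voxels):
    """Return the boundary faces of a set of occupied (x, y, z) cells.

    Each face is (cell, outward_normal); a real mesher would turn each
    face into two triangles and weld shared vertices.
    """
    normals = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    faces = []
    for (x, y, z) in voxels:
        for (dx, dy, dz) in normals:
            if (x + dx, y + dy, z + dz) not in voxels:  # exposed to empty space
                faces.append(((x, y, z), (dx, dy, dz)))
    return faces

# A single voxel exposes 6 faces; a two-voxel bar exposes 10 (the shared pair is hidden).
```

Marching cubes would give a smooth surface instead of cubes, and mesh decimation would then reduce the triangle count; this sketch is just the zeroth-order version of that pipeline.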
I think with the right tech, time, and execution this could be a matter of:
Can you clarify what you mean by Procedural Generator? Isn't a generative model already a procedural generator? Its just that a model generated in case of the referred paper is voxel based. Did you mean, generate parameters of a pre-specified model e.g https://graphics.ethz.ch/~edibra/Publications/HS-Nets%20-%20... , although this paper is just learning a regression to the human body model (not using GANs).
Curious to know more about your train of thought. I am working as a researcher in the domain and thinking of experimenting with GANs for 3D model estimation using similar inputs as the one in the paper I referred to.
Content and patch distribution for video games: Data integrity, progressive downloads, file-level patching, compression, encryption, and platform/version branching.
It's quite mind-boggling; nobody is really doing it at industry scale. Every video game developer has their own way, all of which have their own problems.
It is a very hard problem. Blizzard actually came up with a very good system, but it's not in a state where it can be commercialized or open sourced.
I actually think whoever comes up with a system which solves these problems in a clean and consistent way will be sitting on a little revolution for content distribution.
I devoted 10 years to game content distribution, packing, compression, etc. (I'm not in gamedev anymore.)
This is a very easy problem, usually solved by attaching a fairly simple script that is aware of your file formats to any commercial installer system.
Some companies even sell more or less standard solutions for that, but in reality, out of any given 1000 games, 900 will have very different data formats, and all have fairly good reasons to do so; using universal "patch systems" really creates more problems.
I think the "900 different data formats" problem is something that will go away as we move towards better tools which cover all the standard use cases.
Gamedev is riddled with really smart people who reinvent the wheel all the time because they found a way to micro-optimize this or that. They get to do this because, until recently, there was no "good enough" solution for a wide range of games (or the "solution" was priced with enough zeroes to make Bill Gates cringe).
But you saw how popular Unity got, and how fast. That's the games industry in a nutshell: ripe for solutions that work for more than just one studio.
BTW Unity, with all its excellence, has a really horrible data format for content and patch distribution, and has had (and still has) huge problems with this. Perhaps the legacy of early overengineering and the struggle to protect games from easy reverse engineering.
And compare that to, say, the simple incremental zips of Quake with alphabetical file loading order. A total no-brainer to implement and use. (I have even seen zips with custom LZMA compression!)
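That Quake-style scheme is trivial to sketch: each patch is just another archive, and load order decides which copy of a file wins. An in-memory illustration (the pak file names are made up):

```python
import io
import zipfile

def make_zip(files):
    """Build a zip archive in memory from a {path: bytes} mapping."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for path, data in files.items():
            zf.writestr(path, data)
    return buf.getvalue()

def load_layered(archives):
    """Merge archives in alphabetical order; later ones override earlier ones."""
    merged = {}
    for name in sorted(archives):  # e.g. pak0.zip, then pak1.zip, ...
        with zipfile.ZipFile(io.BytesIO(archives[name])) as zf:
            for member in zf.namelist():
                merged[member] = zf.read(member)
    return merged
```

A patch release is then just a new zip containing only the changed files.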
So, if anything, someone will have to solve a problem of artificially created obstacles, not a problem per se.
The path forward for games is roughly similar to where digital audio is now: Comprehensive workstation environments with an easing facade through plugins, presets, etc. The coarse elements of a rendering algorithm or a piece of game logic can be reduced to a processing graph, behavior tree, or other convenient abstractions. They can plug into each other by exposing both assets and processing as globally addressable data. Original coding for game logic will still be required for the foreseeable future, but most of the development problem is weighted towards getting assets in the game, and that can be abstracted.
This is done in bits and pieces across existing engines and third-party tools, but there's a lot of room to make it cheaper and easier.
I used to use irrlicht and Ogre. Both have the problem of only really doing graphics and to a certain extent input. In comparison, Unity and Unreal offer the whole package: graphics, asset pipeline, audio, networking, and physics.
Speaking from experience as I'm currently making the jump to Unity for my projects, the time savings of choosing one of the all-in-one engines instead of gluing together engines is really substantial.
Wharf is a protocol that enables incremental uploads and downloads to keep software up-to-date. It includes:
A diffing and patching algorithm, based on rsync
An open file format specification for patches and signature files, based on protobuf
A reference implementation in Go
A command-line tool with several commands
Butler is the commandline tool for generating patches (it can negotiate small diffs from the server without requiring a full local copy of the thing you're diffing against), uploading them and applying them back on the client.
It is used to power itch.io's Steam-like application, itch: http://itch.io/app, delivering multi-gigabyte game installs & updates.
Hey, amos here, main developer of wharf/butler, here's a quick technical summary so you don't have to do the digging yourself:
- File formats are streams of protobuf messages: efficient serialization, easy to parse from a bunch of programming languages. Most files (patches, signatures) are composed of an uncompressed header and a brotli-compressed stream of other messages (in the reference implementation, compression formats are pluggable).
- The main diff method is based on rsync. It's slightly tuned, in that it operates over the hashes of all files (which means rename tracking is seamless; the reference implementation detects renames and handles them efficiently), and it takes into account partial blocks (at the end of files, smaller than the block size).
- The reference implementation is quite modular Go, which is nice for portability, and, like elisee mentioned, used in production at itch.io. We assume most things are streaming (so that, for example, you can apply a patch while downloading it, no temporary writes to disk needed), we actually use a virtual file system for all downloads and updates.
- The reference implementation contains support for block-based (4MB default) file delivery, which is useful for a verify/heal process (figure out which parts are missing/have been corrupted and correct them)
- The wharf repo contains the basis of a second diff method, based on bsdiff, for a secondary patch optimization step. The bsdiff code is well-commented with references to the original paper, and there's an opt-in parallel bsdiff codepath (as in multi-core suffix sorting, not just bsdiff operating on chunks)
- A few other companies (including well-known gaming actors) have started reaching out / using parts of wharf for their own usage, I'll happily name names as soon as it's all become more public :)
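The container layout described above (an uncompressed header message followed by a compressed stream of further messages) can be sketched in Python. This is a toy illustration, not wharf's actual format: zlib stands in for brotli (which is not in the stdlib), and the message contents are invented. The length-prefix framing is the same base-128 varint scheme protobuf uses for delimited messages.

```python
import io
import zlib

def write_varint(buf, n):
    # Protobuf-style base-128 varint length prefix.
    while True:
        lo = n & 0x7F
        n >>= 7
        buf.write(bytes([lo | (0x80 if n else 0)]))
        if not n:
            return

def read_varint(buf):
    shift = result = 0
    while True:
        b = buf.read(1)[0]
        result |= (b & 0x7F) << shift
        if not (b & 0x80):
            return result
        shift += 7

def write_msg(buf, payload):
    write_varint(buf, len(payload))
    buf.write(payload)

def read_msg(buf):
    return buf.read(read_varint(buf))

# A toy "patch file": one uncompressed header message, then a
# compressed stream of body messages.
f = io.BytesIO()
write_msg(f, b"PATCH v1")
body = io.BytesIO()
write_msg(body, b"op: reuse block 17")
write_msg(body, b"op: fresh bytes follow")
f.write(zlib.compress(body.getvalue()))

# Read it back: header first, then decompress and stream the rest.
f.seek(0)
header = read_msg(f)
stream = io.BytesIO(zlib.decompress(f.read()))
ops = [read_msg(stream), read_msg(stream)]
```

Keeping the header uncompressed means a tool can identify the file and pick a decompressor before touching the (pluggable) compressed payload.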
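The rsync-style matching mentioned above rests on a cheap weak checksum that can be "rolled" one byte at a time, backed by a strong hash to confirm candidate matches. A toy sketch (tiny block size, Adler-style checksum; not wharf's actual code, which also handles renames and trailing partial blocks):

```python
import hashlib

BLOCK = 4  # toy block size; real tools use kilobytes to megabytes

def weak(data):
    # Adler-style weak checksum over one window.
    a = sum(data) % 65521
    b = sum((len(data) - i) * c for i, c in enumerate(data)) % 65521
    return (b << 16) | a

def roll(h, out_byte, in_byte, n):
    # Slide the window right by one byte without rescanning it.
    a, b = h & 0xFFFF, h >> 16
    a = (a - out_byte + in_byte) % 65521
    b = (b - n * out_byte + a) % 65521
    return (b << 16) | a

def signature(old):
    # weak hash -> [(block index, strong hash)] for the old file.
    sig = {}
    for i in range(0, len(old), BLOCK):
        blk = old[i:i + BLOCK]
        sig.setdefault(weak(blk), []).append(
            (i // BLOCK, hashlib.sha256(blk).digest()))
    return sig

def find_matches(new, sig):
    # Scan `new`, reporting (offset in new, block index in old)
    # wherever a block of the old file reappears.
    matches, i, h = [], 0, None
    while i + BLOCK <= len(new):
        window = new[i:i + BLOCK]
        if h is None:
            h = weak(window)
        for idx, strong in sig.get(h, []):
            if hashlib.sha256(window).digest() == strong:
                matches.append((i, idx))
                i += BLOCK
                h = None
                break
        else:
            if i + 1 + BLOCK <= len(new):
                h = roll(h, new[i], new[i + BLOCK], BLOCK)
            i += 1
    return matches

old = b"abcdefgh"
new = b"XXabcdefgh"
matches = find_matches(new, signature(old))  # [(2, 0), (6, 1)]
```

A patch then only needs "copy block N" instructions for the matches plus the fresh bytes in between, which is why only the signature (not the full old file) is needed on the diffing side.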
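The verify/heal idea (hash fixed-size blocks, then re-fetch only the bad ones) can be sketched as follows. The block size and the fetch callback are stand-ins for illustration, not wharf's API:

```python
import hashlib

BLOCK = 4  # stand-in for the 4 MB default mentioned above

def block_hashes(data):
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def find_bad_blocks(local, expected):
    # Indices of blocks that are missing, truncated, or corrupt.
    got = block_hashes(local)
    return [i for i, want in enumerate(expected)
            if i >= len(got) or got[i] != want]

def heal(local, bad, fetch):
    # Re-fetch only the bad blocks and splice them back in.
    buf = bytearray(local)
    for i in bad:
        chunk = fetch(i)
        end = i * BLOCK + len(chunk)
        if end > len(buf):
            buf.extend(b"\x00" * (end - len(buf)))
        buf[i * BLOCK:end] = chunk
    return bytes(buf)

remote = b"abcdefghijkl"
local = b"abcdXXXXijkl"  # second block corrupted
bad = find_bad_blocks(local, block_hashes(remote))
fixed = heal(local, bad, lambda i: remote[i * BLOCK:(i + 1) * BLOCK])
```

The appeal is bandwidth: a one-block corruption in a multi-gigabyte install costs one block of re-download, not a full reinstall.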
It's actually exactly what you described - the documentation is very sparse on it because it's an internal thing (I'm guessing you found the CASC documentation, not the NGDP one). If you're interested, shoot me an email and I can send you some more details; but it'd simply be for intellectual curiosity, as I said it's an internal protocol.
It's exactly what you want, but this service isn't popular. It doesn't work because everyone is on Steam. Network effects. It's like when app.net tried to replace Twitter. You can get mad at users, but users are using Steam and Origin and Battle.net, and they don't care.
For themselves! It's also not a great system, lots of legacy. More to the point though, it's not a commercial system. Steam behaves as a distribution platform and licenses the publishing rather than the distribution.
Which is not at all interesting for self-published games (be it indies who want to avoid steam, big publishers with their own systems etc).
Steam supports any game from any publisher, including self-published games. Steam is exactly what you get when you try to solve this problem, because none of the big companies want to be involved, since they see it as competition. Blizzard and EA are the only two big companies I know of that are not on Steam, and they both have direct competitors to it that are vendor-locked.
I used to work in the keynote speaking industry as an agent.
This industry is horribly inefficient, and intentionally so. It's mostly east-coast based (NY/DC) but functions similarly to the LA entertainment industry.
The main problem is this:
You are a meeting planner (not your job title; you are actually a marketing person or executive assistant) and your boss just tasked you with finding a speaker for your next company meeting.
What do you do? You can either:
1) Find a speaker yourself by searching Google and sifting through the mess of results
2) Call a speaker's bureau and get raked over the coals on price
3) ???
Ideally, there would be a marketplace for speakers. Where you would be able to search for talent that fit your criteria (available these dates, for this price, talks about these things, is well-regarded, etc.) and book them online.
There are marketplaces for speakers: espeakers.com, orate.me, bigspeak.com, kepplerspeakers.com, speakermatch.com, eaglestalent.com, celebrityspeakersbureau.com, among others. The speaker profiles include prices, areas of expertise, and a way to inquire and book. What they don't show, and what you're asking for, is speaker availability and popularity. The problems I see with disclosing speakers' schedules:
1) Celebrities have real privacy concerns
2) Talent does not want their real demand every day of the year exposed to the public, because it can hurt the mystery of their appeal
3) Disparate calendaring methods maintained by each speaker mean that no web site can be in sync with all of its talent, so instant online booking is hard to do
As for the popularity requirement you asked for ("well-regarded"), it is difficult to define because assessments are nonstandard; so what you get in all speaker marketplaces is endorsements and accolades the speakers cite in their own profiles -- which will always be self-recommending.
No, none of the sites you mentioned solve the problem I posed.
Bigspeak, Keppler, Eagles Talent, and Celebrity are bureaus and function in a traditional way. Espeakers and SpeakerMatch are essentially speaker directories -- they make money from speakers paying them a monthly fee, or in a lead-gen style.
Orate.me is new to me, I haven't seen it before. In a cursory look at their site, it looks like their speaker list is just NSA members.
Also the calendaring problem is easy to solve...
There's enough public data to develop a sentiment algorithm and solve the popularity problem.
When I worked at a bureau, we had a running joke... When a meeting planner asked, "How much is so-and-so speaker?" We'd reply, "What's your budget?" Coincidentally, that was how much the speaker was. Regardless of how much we knew the "rack rate" of that speaker was.
During the 2008 election, Rudy Giuliani was running as a Republican nominee. When his financial disclosures came out, he surprised everyone with how much he made on the speaking circuit. News orgs filed FOIA requests to see what universities and public institutions paid him to speak.
For some events, he'd speak in the same city to different groups. The price each group paid was wildly different (+/- 50k). The meeting planners hit the roof.
Speakers bureaus do not view meeting planners as their clients, the speakers are.
The entire Real-Estate industry is ripe for disruption. Exactly why are we paying 6% to "agents"? What are "agents" actually doing? 10+ years ago the need for "Real Estate Agents" was there, as information was not readily accessible. Things like "what kind of area is this property in", "what are schools like", "what are prospects for this area 5 years down the road" and so on. All this information (and MUCH more) is currently readily available online. I have purchased and sold 4 homes in the last 6 years; the agent I have worked with has made many tens of thousands of dollars, and I can tell you with utmost precision that she hasn't spent more than 10 hours total. My wife and I scouted locations, went to open houses, did research online for things that are important to us, and finally called the agent and said "we want to make an offer." Other than that, the only other time we called was for her to let us into a home if there was no open house soon.
10+ years ago we had other "agents" that made a lot of money, e.g. the "Travel Agent" in the Travel & Leisure industry. Then the internet came along, and now we have thousands of websites that have replaced what Travel Agents used to do...
For Real-Estate I think MLS is just a start, a small piece of the puzzle; someone very smart is going to disrupt the whole industry and make many billions of dollars.
The entire Real-Estate industry is ripe for disruption. Exactly why are we paying 6% to "agents"? What are "agents" actually doing? ...
I basically agree, but I will say that when we bought our house and the (FSBO) seller turned out to be a flake and/or an outright nutcase, the realtor we were working with earned every penny of his 3%. I won't go into what he did for us because some of it may not necessarily have been legal for someone who's not an attorney to perform. Let's just say it took a lot of hustle to hold the seller to his commitments and make the deal happen according to the contract... especially when some of the flakiness turned out to be our own fault, due to the (in)actions of our mortgage lender.
Most of the other realtors I've dealt with fit your description perfectly. Nowadays, my thinking is that they're like airline pilots -- most of the time you can take them for granted and even think about how to replace them with automated processes, but when things start to go wrong, you may find yourself appreciating their work more than you thought you would.
I think a lot of the value in real estate agents is a motivated party to make sure the deal doesn't fall apart. Buying a house is emotional for both sides and petty things can cause people to walk away.
The entire Real-Estate industry is ripe for disruption.
Exactly why are we paying 6% to "agents"?
https://www.redfin.com/ is already doing this. They charge half the normal fee if you're selling and they give about half of their fee to you after purchase if you're buying.
Redfin isn't disrupting the industry. They set out to, and ended up pretty much recreating the existing model and competing on price.
"We’re full-service, local agents who get to know you over coffee and on home tours, and we use online tools to make you smarter and faster. More than 10,000 customers buy or sell a home with us each year."
That's pretty much what any brokerage is. The only difference is they pay their people salaries. Any agent that can actually substantially sell and earn would strike out on their own and get their brokers license.
We are actually in the process of building out a platform that gives home sellers the power to sell their home with the support, guidance, and advice of an agent without the fees. Feedback is welcome.
In less than 30 seconds, you can begin the process of listing your home, with no contract or fees, and the listing includes MLS syndication.
The app auto-dispatches photography, drop-ships signs and lock-boxes, and starts the documentation management process.
At anytime you can reach out to your agent via phone, chat, or email.
When offers come in, they are displayed in "easy to understand" language, and you can review them with your agent/listing account manager anytime.
When closing time comes, we dispatch a notary to the home and you exchange everything at the home. Super easy. Super convenient.
How we make money:
We retain a small fee at closing. When you go to buy your next home, we rebate that fee back to you when you use one of our vetted affiliate agents.
It is a win win. We keep the small fee and the agent "rebate" is essentially the acquisition fee for the guaranteed buyer. Agents don't waste their time working leads that will most likely not close (which is part of the reason why big commissions exist) and home sellers save thousands of dollars.
We operate in Colorado right now using brute force, but V1 of our app will be released in January of 2017. We will be servicing Washington, Colorado, and Minnesota as of February.
Sure, but what are you paying half for? Imagine owning a home which sells for $1,000,000. Currently you pay roughly 6%: $60,000. You pick Redfin; they take the money and give you back half, but you are still paying them $30,000!!
But if your house is sold for $100,000 you'd pay $6,000 or $3,000 to Redfin. The whole thing is just the biggest currently running scam that I can think of and someone will come along and totally disrupt this entire industry of scammers :-)
Exactly. Does it take ten times as much work for a real estate agent to sell a $1m house versus a $100k house? Obviously not. So what are we paying ten times as much for in the one case, and if there's no connection between effort and fee, what are we paying for, and why, even in the low end case? Totally a scam, totally ripe for disruption.
The level of service is pretty low too. It's basically handling setting up a few appointments to see houses, helping grease the skids for inspections, and setting up a closing with a settlement company/lawyer. Stuff that really doesn't take all that much expertise/knowledge.
I'm sure there are little things that go on behind the magic curtain, but definitely something that could be automated in a b2b fashion.
The worst part of it all - all that money is spent basically to protect the seller so they can wash their hands of the whole deal once the house is sold. There is very little to no buyer protection in the real estate world.
Sadly your commission on that million dollar house is actually $45k -- Redfin cuts their seller's agent commission in half (3% becomes 1.5%) but the other half goes to the buyer's agent, so you pay 4.5% rather than 6%.
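A quick sanity check on those numbers (a trivial sketch; the percentages are the ones stated above):

```python
price = 1_000_000
seller_side = 0.03 / 2  # Redfin halves the listing side: 3% -> 1.5%
buyer_side = 0.03       # the buyer's agent still gets the usual 3%
fee = round(price * (seller_side + buyer_side))
# fee is 45_000, i.e. 4.5% rather than the traditional 6%
```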
As a Brit I don't get the agent thing for buying and selling houses in the U.S. As I understand it, it's basically required for both the buyer and seller to have an agent.
Here you can sell a house privately, and the only fees you need to pay are the solicitor for conveyancing - the fees are around the same when buying or selling, between £500 - £2000. They will just do the paperwork, check there aren't any issues with the deeds, do minor contract negotiations, and register the sale.
The purpose of an agent here is really just to have a shop where they can advertise your sale, take clients and guide them around a house (but TBH I'd rather just look myself without an agent as they don't add any value in my experience), and do a bit of management between the buyer and seller. For this they'll usually charge 0.5% - 3% (to the seller), but if you want to do it yourself you can just advertise on sites such as Zoopla.
Wife's a real estate agent so I have some insight.
Agents mainly do two things:
1. If selling they can put your house into MLS.
2. For buyers they can let you into a house.
Being able to do these things requires NAR membership, a state license, and other fees, and costs real estate agents thousands per year. Also, driving to meet clients at houses takes gas and time. And your 6% commission is paying for all the other people the agent spent gas and time on who did not buy a house.
Would you be willing to advertise in the newspaper that your house is for sale and leave the doors unlocked so that random people could check it out?
True, a buyer could be let into a home by the seller's agent and even use the seller's agent to purchase the home. Agents love this! It's called getting both ends of the deal. It doubles the agent's commission. It does not benefit the buyer at all, though, and is actually a bad idea.
Yeah, even with a buyers agent, the buyer doesn't get much. Both agents (if being paid out of the 6% commission) have a fiduciary responsibility to the seller.
About the only thing a buyer gets out of the whole deal is a title search to make sure there are no outstanding liens on the property - and they have to pay for that themselves!
Opendoor is already doing agentless open houses with access codes (smart locks) and cameras throughout the house, where people can visit whenever they want without calling ahead. Not that difficult.
That seems to me to be fraught with all manner of peril. Suppose a burglar poses as a buyer under an assumed identity, and then robs the house while wearing a mask. What is the recourse?
The house unlocks to a tracked, verified identity from a smartphone, not to random people in masks, would be my guess. Also, empty houses are hard to steal from, only vandalize.
I guess I'm asking how the identity is verified. It's not hard to spoof an identity on a smart phone. And empty houses are full of copper wire and electrical appliances (and in this case, apparently, security cameras) which are commonly stolen from construction sites (and cost a lot more than a smart phone nowadays).
Apparently Opendoor literally buys (and fixes up) the house and then re-sells it, rather than simply arranging the transaction, so they're selling an empty house.
Buyers have to go through an identity verification stage to gain access to the homes, including, I believe, the taking of a credit card. Not impossible to spoof, but far more frustrating than merely downloading an app, clicking "I agree" and getting the unlock code to burgle.
It's worth what agents are willing to do it for. If you can't find a real estate agent willing to help you sell your house for 0.5%, then it is not worth it.
The real estate industry is very heavily regulated and, worse, these regulations are extremely fragmented. Any disruption is going to have to deal with that.
Also, agents for basic residential purchases where the property is about as mundane as it gets provide very little value. However good agents will do proper diligence by inspecting the public records for the property. That part may be disruptable with software and an ambitious enough records maintenance effort. But good agents who do their jobs properly are well worth 6% (and, by the way, that 6% is frequently split two ways, so each agent gets 3%). Of course my bar for "good agent" is much higher than Joe Coldwell or Jane Keller-Williams. "Good" agents know local and state regulations very well and do diligence to inspect the public record and work closely with well qualified inspectors to get the best (lowest for buyer, highest for seller) price they can. Most agents, especially for "big box" brokers do basic humdrum contract paperwork and have a very basic approach to disclosure and negotiation and don't add much value. They're ripe for disruption.
Australian here: on my last house sale, I did a deal with the agent where if he got a price above a quite-high threshold that he would get 3% instead of his usual rate. This was seen as a remarkably generous offer, unheard of in the industry.
I've only once seen someone use a buying agent: husband and wife were both hospital doctors with three young active kids and they figured they valued their time enough that they would pay for a real estate agent to short-list properties for them.
I wonder what is so much more difficult about real estate in the USA that it requires so much more in the way of services.
Australian who lives in the US and is looking at buying a house. What's different? Almost everything, starting from the basics like there aren't auctions, you have to submit a secret bid that the seller will read and then tell you if it was accepted or not. Factors that can affect acceptance include what percentage of your purchase will be made with cash, whether you are willing to buy it without bothering to inspect the house first, and how many days it will take for you to close the deal if accepted - and of course, everything else like whether you are the kind of nice quiet couple they think their neighbours want to live next to, whether you have the same name as their brother-in-law, and whether you are the kind of person they want to imagine living in their house once they are gone.
> work closely with well qualified inspectors to get the best (lowest for buyer, highest for seller) price they can.
Kind of an aside; what's the motivation for buyer's agents to get good inspectors or the lowest price possible? They're getting a percentage, and they're motivated to make sure the deal doesn't fall apart. Although, as a buyer, it's not like I knew good inspectors anyway. The best I could do was read their sample reports and go off of my own (sparse) knowledge of home construction.
Agreed. There are many potential advertising opportunities to boot.
When people buy a new home and move they typically need to get new utilities, health and car insurance, doctors, dentists, furniture, appliances, etc... They need to decide where they're going to shop for groceries, which restaurants they want to explore, what activities they're going to get into.
As someone who grew up in a family of real estate investors, landlords, and brokers, and who has 14 years of experience with it as an adult, I have to say (and I'm not picking on you, because what you are espousing is a widespread belief) that you are wrong.
You are wrong about the amount of time she spent, and wrong about the industry being ripe for disruption. Spending 10 hours on four deals is not physically possible, or you have the luckiest, worst realtor in the history of the broker business.
Many have said the same thing about the industry, where are the results? Realtors are not still in business because of information asymmetry. Redfin has tried, and they have not succeeded. In fact they ended up moving towards the historical model. That tells you something.
I'll let you in on a secret, you can sell and buy any house you own or want yourself! It's called For Sale By Owner and many people do it. You can also negotiate on commission.
But be careful, because there are a lot of laws in place in RE, to protect buyers. It is heavily regulated. You could make a mistake and fail to disclose something that could end up costing you HUUUUGEEE.
What you get with a good realtor is a ton of service. Just the other day I drove 25 minutes each way to go make sure a home's thermostat was on and the house was warm when the temperature dropped. I was doing a favor for a realtor I know, who is selling a house for a family in another part of the country.
A good realtor will do things like that for you. And it's often a thankless job. The house I did that for might not even end up selling, the realtor may not even get a commission and it ends up being a lot of free work.
A good realtor knows attorneys, handy men, plumbers, electricians, landscapers and is a tough negotiator. A good realtor will market your property to a wider audience than would be easy for you to do yourself.
There are a lot of things that can go wrong and disrupt a sale, often times at the last minute. Those connections can save a deal. I've seen it countless times.
The reason the real estate broker business hasn't been disrupted is that it's a service business, not a product, and it's difficult to reduce it to something that scales and can be automated.
There is a large graveyard of startups that have tried.
I gave up my real estate license because of the sentiment you are expressing here, and the amount of competition that was willing to put up with it. It's a crap job in my opinion, it benefits the customers much more than the agents in all but the really hot markets.
"Spending 10 hours on four deals is not physically possible, or you have the luckiest, worst realtor in the history of the broker business."
In lots of cases there simply isn't anything to do. I saw the house, did my own research on it, went to see it, did more research, did more research, and then called my Realtor and said "Make an offer." Is that worth $43,000? Why would she make $43,000 for a few hours of work? I bought two houses this way; my first and last call to my Realtor (a personal friend of mine) was "Here's our offer on the house, send it in." One offer was immediately accepted; the other one went back and forth 3 times over two days.
The simple fact is that charging based on a percentage of the sale is theft; there is no other way to put it. There is one other "entity" that does this, and that is the Government, with sales taxes (also theft :-) ). Can you tell me one good reason why real estate agents do not simply charge by the hour?
The TLDR on this piece is that since commission is a fraction of sale price, the primary incentive for agents is to just get a sale, and fast, rather than maximise sale price, as even tens of thousands extra adds only a little to their cut, so their time is better spent pushing the property through the sales pipeline and moving on to the next one as quickly as possible.
There's a study which draws similar conclusions:
> Our central finding is that, when listings are not tied to brokerage services, a seller's use of a broker reduces the selling price of the typical home by 5.9 to 7.7 percent, which indicates that agency costs exceed the advantages of brokers' knowledge and expertise by a wide margin.
Good luck. I used to work at a startup that tried wrangling MLS data. There is the not-so-trivial problem of gathering all of the data together into one schema. But to offer services on MLS data (like a forward-facing website) you need to have a physical office in the MLS region you're serving. This problem seems to be more legal than anything.
For anyone wondering, this is not a technical problem. It is a political/business problem.
It's a solved problem to make a system that stores information about home listings and transactions. Other countries have done it. Yeah, houses are annoying because of all the weird combinations. But the thing is just a CRUD app.
This system still exists because of organizations that are based on information asymmetry. A small group of experienced people could make a really, really nice platform for sharing real estate data in a month and it would never get anywhere.
The biggest one is that realtors exist entirely because of regulation at the state level.
After that is the extremely entrenched business interest in the status quo - a lot of people are making money off the current system, and any real disruption is going to cost 95-99% of the real estate industry their jobs.
As noted elsewhere, the level of regulation is such that even offering better data to buyers/sellers has a lot of regulatory overhead, including physical offices with real people.
This problem is hard to solve at the root because each MLS is independently owned and heavily influenced by the National Association of Realtors and local governments. As an active RE investor myself, I'd love to work on a problem in this space.
You hit the nail on the head. Having developed with MLS data before, my opinion is there's little incentive for MLSs to make the data easy to work with. As long as each MLS gets their data distributed on the large portals that most consumers use, there's little reward to update (read: spend money on) their systems to implement a modern format to help indie developers and startups. That's the status quo. I don't think anything is going to change unless the large portals band together and spearhead some kind of industry-wide effort to fix this problem.
As an aside, I'm working on a RE investment product that solves challenges similar to this one and I'm looking for RE investors to join me. If you're interested, shoot me an email at hello[at]myname.com.
Ugh. Realtor.ca isn't really due to standardization but monopolization and control. It's a site by CREA (the Canadian Real Estate Association) and has exclusive rights to data from all member associations.
disclaimer: I am affiliated with rew.ca (a competitor on the west coast)
Oh, this a million times. In fact I have a huge pile of cash waiting for someone to fix this.
Right now I'm only pulling from a couple of different RETS services, but already it's hellish. Regular short-term schema changes, terrible lag in pulling data (10+ minutes to pull 30 listings!), low API limits (cut off for pulling image URLs too fast, but they won't give a limit, just saying "if we feel it's too much", so pulling images lags for hours as I have to be extra careful; also no option to pay extra to speed up). That's before you get to matching up fields across multiple MLSs or dealing with weird crossover effects (MLS X cannot be shown in Y from RETS #1, but that's fine in RETS #2).
In fact, I will give you all my money right now anyway, I'm going to go live in the hills far away from it all.
Some friends of mine started a wordpress company a few years ago, and after some thrashing trying to find product/market fit, they ended up pursuing real estate SEO, MLS indexing, and easy wordpress + mobile app generation. Check out https://wovax.com/
As specific organs reduce functioning, some seniors living at home need to revise their recipes to avoid complex collections of foods, just as TIAs and other problems reduce their ability to deal with the challenges. To-go food does not work. For some families, hospital-style food may mimic the diet they are accustomed to, but for immigrants, traditional hospital food may be horrific. Older people tend to rely on a very limited set of dishes, so custom-tailoring recipes may be cost-effective. There are dietitian-run food delivery services, but not ones that create meals from clients' recipes. The affluent market tends to live away from their parents. The pain point is my co-worker telling me, "Food is killing my dad, and there is nothing I can do about it unless I quit this job and move home, which would be disastrous for my spouse and kids. As far as I can tell, I have to know how to solve the problem and just sit here, 3,000 miles away, and watch him die." I think this market would pay a premium.
>> There are dietitian-run food delivery services, but not ones that create meals from clients' recipes.
The problem is that you need volume for it to make sense. One way to get that is aggregating a few people who want the same recipe. The other way is ordering those meals, frozen, for a bunch of days.
I completely agree. I think one of the big problems, aside from someone cooking a family recipe while also modifying it slightly so it's diet-appropriate, is somehow going beyond just making and delivering this food. My grandfather is 89; his wife is 17 years younger than him, so she still cooks for him and makes sure he's eating. On the flip side, once my grandfather on the other side of the family died, my grandmother fell to pieces and Alzheimer's rapidly set in. Is there somehow a cost-effective way of creating these dishes either at the client's house, or maybe delivering the meal and providing an hour of genuine conversation? I guess it'd be a specialized home health aide at that point or something. I guess just sticking with the family recipe modified for dietary needs is the smart way to start off. You could follow this up with partnering with old age communities/nursing homes and providing this custom service en masse.
Same exact story with my grandparents. Once the healthy one is gone, the other one sees a drastic reduction in quality of life as well as life expectancy.
The costs incurred for at-home help with a dedicated Eastern European live-in domestic helper were huge. It might still be a couple of decades away, but I can totally see a model where you rent a house robot that takes care of cooking, personal cleaning, and limited interaction (no need for super-intelligent AI when chatting with an 80-year-old man with Alzheimer's).
I know I speak to a computer 99% of the day (programming, games, movies, sms, maps, etc), but please kill me before my future son gives me an AI to talk to. I agree with the cooking and showing me movies, but our jobs as sons can't be replaced.
On the other hand, if the robot makes snapchats with the elderly, that would be a way to bring our parents back into our lives.
"ELIZA's creator Joseph Weizenbaum thought the idea of a computer therapist was funny. But when his students and secretary started talking to it for hours, what had seemed to him to be an amusing idea suddenly felt like an appalling reality." http://www.radiolab.org/story/137466-clever-bots/
In the meantime, parents need safe food. If anyone goes for this, good luck!
AR/VR visualization of 3D and 4D microscopy data (multicolor 3D video, as well as 3D point clouds over time) for biological research.
Look for "lattice light sheet microscopy" or "superresolution microscopy" such as (3D)-STORM or STED.
These techniques are being adopted at a high pace. Groups spend $500,000, and often more, on the hardware. They can produce terabytes of data within days, but we hardly have any tools to view and interact with it. (And the people are overwhelmed with the analysis.)
Imagine a holographic video of a living cell (potentially in near real time) where you can zoom by grabbing the hologram.
I'm intrigued! How do you interact with these images right now? Do you display it on screen and rotate it with the mouse? Can you elaborate a bit more about how would a VR/AR solution add value here other than being a fancier solution?
Manipulation with the mouse if you're lucky. Often this would involve Matlab, or some specialized software. But in many cases we don't even have this and people use Fiji to scroll through the z-dimension with a slider.
For presentations people often render a movie with Matlab, investing hours to get it right. With AR, you could take a movie by filming with a virtual camera in your hand.
Augmented Reality would add the most value. Some examples:
+ intuitive exploration of the data (imagine learning about a plant by scrolling through cross-sections)
+ intuitive manipulation of the perspective (the mouse 3D rotation thing is really tricky)
+ collaborative viewing
+ annotation of objects (eg. tracing a filament through 3D by following it with the finger)
+ avoid occlusion by just zooming and moving in
+ be able to point at things in a 3D image
It would bring much more natural ways to interact with the data. Essentially scaling up your molecular structure 10^6 to 10^7-fold so you can explore it as you'd explore a sculpture.
A friend just did his diploma thesis on this. He built a system where you use a tablet device in your hand to push through 3D space visualizing the layers. I couldn't possibly describe it well enough, but I'll forward him a link to this thread and see if he answers.
Do you know where I could source some sample data? I'm developing a VR data visualization system but not in this particular context, could certainly look into it.
From the same site you can download "pollen.h5". Then follow the steps to load the dataset in "hyperstack (multichannel)" mode. This will open a window where each frame in the movie is a z-slice of the 3D image.
Then go to Plugins/Volume Viewer (scroll down) and switch the mode (top left) to "Volume". (this is what you should get: http://i.imgur.com/0QoGa1t.png)
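For anyone poking at this data programmatically rather than in Fiji, the core idea is simple: a 3D stack is a (z, y, x) array, and each movie frame is one z-slice. A minimal sketch using a synthetic stand-in for pollen.h5 (to load the real file you'd typically use h5py; the dataset key inside the file varies, so inspect it first):

```python
# A 3D microscopy stack is just a (z, y, x) array: each "frame" in the
# hyperstack is one z-slice. Synthetic stand-in for pollen.h5 below;
# the real file would be opened with h5py and its dataset key inspected.
import numpy as np

volume = np.random.rand(32, 64, 64)      # 32 z-slices of 64x64 pixels

# Scrolling the z-slider in Fiji corresponds to indexing the first axis:
mid_slice = volume[volume.shape[0] // 2]

# A crude stand-in for the "Volume" view: max-intensity projection along z.
projection = volume.max(axis=0)
print(volume.shape, mid_slice.shape, projection.shape)
```

This is only a sketch of the data layout, not of any particular viewer's rendering pipeline.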
The same people also published lots of 3D data of the zebrafish embryo. Have a look at:
For light-sheet 3D video of a developing fly embryo, take a look here: http://www.digital-embryo.org/ They have movies as well as downloads of the raw data and a Matlab pipeline for analysis.
Notice that this was published in 2012. Fresh data doesn't get published before the analysis is done and the paper is written.
These are images taken by confocal microscopy, where you focus a laser scanning microscope on a plane and only see that. Then you move the focus plane up by a bit and take the next picture.
Super-resolution microscopy such as STORM, STED or PALM on the other hand will give you coordinates.
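Since STORM/PALM output is a point cloud of localizations rather than a voxel grid, a common first step toward visualizing it alongside confocal-style stacks is binning the points into a density volume. A minimal sketch with synthetic points (real localization tables also carry precision and photon-count columns, which this ignores):

```python
# Bin super-resolution localizations (a point cloud) into a voxel density
# volume so it can be rendered like a regular 3D stack. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(0, 1, size=(10_000, 3))   # (x, y, z), normalized units

# 32^3 density volume: each voxel counts the localizations inside it.
density, _edges = np.histogramdd(points, bins=(32, 32, 32),
                                 range=[(0, 1), (0, 1), (0, 1)])
print(density.shape, int(density.sum()))
```

Rendering choices (Gaussian splatting by localization precision, etc.) are where the real work is; this just shows the representational gap between the two data types.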
A monthly subscription service for a fee of $5 - $10 where every month I get a new kind of quirky instrument (shaker, wind, wood block and stick, whistle, triangle, or other noise maker) as a surprise by mail.
The point of the service in the music industry is to inspire new sounds and the device can either be kept or given away to somebody without a lot of second thought. Getting stuck is a big problem. Also in music it's important to collaborate and giving gifts is a good way to make connections and impressions.
Great tie-in with various manufacturers or even retailers to get rid of excess stock, failed impulse-buy items, etc.
Very cool and thank you for providing input! I've already got the 'spiderweb' model in mind so crunching some numbers might be fun, err, well, informative. I'm planning to spend 2017 learning JavaScript and building small things so to have a list of personal projects seems neat. Appreciate your time!
Now that you mention it, I think the idea might be inspired by what Dolly Parton does with her reading program. A child can sign up and the program will send them books for free. Considering music and arts funding don't seem to be high priorities in modern US systems (my perspective, could be wrong) having a low-cost or charitable music version sounds like it fits nicely too.
Honestly I'm pulling up memories now of watching other people's kids over the years playing with various music stuff I've handed over (Korg Monotron synths were the biggest hit) and those are really pleasant to recall!
Eh I was thinking more USPS and less expectation of timeliness. More about the "set it and forget it until it shows up" kind of thing. Point understood though, as $10 starts to get into a competitive space (e.g. Spotify).
I work for a small high-end cosmetics business in the European Union. That particular industry has a lot of compliance and documentation rules imposed on it by Brussels. My predecessor in the role sadly pushed using Apple's FileMaker for that. It's not even that bad in the latest version and certainly offers some advantages over similar solutions, but the guy was horribly in over his head. I'm talking fifty fields in a table having near-identical names, undocumented... everything, no clear UI design paradigms, needlessly complicated UX, and storing PDFs as binary data in tables by the thousands.
But I feel like I'm stuck with repairing his shit because there's not a nice and clean solution anywhere in sight. I thought of Wiki systems but the actual data entry will be done by people who would be completely put off by any kind of syntax/markup whatsoever. I'm neither good enough a Web developer to roll something similar myself nor can I dream of creating something like an entire documentation system.
I think the problem might apply to other smaller businesses in the EU and especially Germany, too. Lots of docu to have ready in the unlikely but not impossible case of an inspection.
For the cosmetics industry it'd need to be able to track ingredients, lots of external evaluation docu, internal procedures and so on. While at the same time it would need to be usable by people who are far removed from tech literate.
It sounds as though a customizable CMS or DMS would be just the thing to take care of the fiddly bits that trip up developers of de-novo solutions (i.e. accessibility, internationalization, usability, binary-format document storage and indexing, access control, etc.). My personal favorite in this space is Plone[1], which has an excellent security record and happens to have many EU developers, but most of the popular contenders will do as a starting point.
Enterprise intranet/extranet apps often start out small and then spread like kudzu, eventually prompting a major project to replace them wholesale with a huge consulting-ware solution (like SharePoint) that never fulfils the promises made. (The exception is the specific niche you're asking about, compliance, which does have proprietary solutions that work but are incredibly expensive.)
To avoid that fate, try to pick something that you know can start small but that you have evidence can also easily (i.e. without a huge consulting engagement) scale to an organization-wide solution with 3rd-party as well as in-house extensions.
You could take a look at Jira. I understand most people reading this are probably cringing right now, but the cause of that, and what makes Jira appropriate, is just how customisable it is. You can set up custom fields, validations, and workflows.
While this is probably not a too bad idea, I just can't stop myself lamenting over the customisability of Jira.
Before going bananas with Jira, either be very strict or use separate instances.
I currently work in a relatively large organisation where all teams of various crafts and trades - not only development - share one Jira instance, since Jira is obviously what 'Agile' teams use.
Let's say that not all teams are equally equipped for analysis and generalisation.
Although there is supposedly some control process, there are now several hundred custom fields, many of them duplicates of the standard fields, along with duplicates, triplicates, or more of many custom fields.
The contents of them all are like the Daily WTF meme: True, False, FileNotFound.
This looks like a problem we recently solved for driving schools in Luxembourg that needed to track a lot of data about students for legal and internal reasons.
I'd be very interested in discussing the needs for a compliance and documentation tracker system and see if there is a market for it.
Have you looked into Jama software? It is customizable and very user-friendly. I use it for requirement management but imagine it would be good for document management as well.
Not sure how much startup potential there is, but in my industry we struggle with adhering to procedures, and the root cause is that our procedures are written in the dark by management that doesn't always have a complete picture of, or much insight into, what they're enforcing. The problem that creates is that employees only look for workarounds to the procedures (because the procedures are misguided or misinformed most of the time), and it creates a bigger mess than if we had no procedures at all.
It's almost as though we need a Procedural Consulting firm that could come in, look at the big picture, and help companies create EFFICIENT and WORTHWHILE procedures that ACTUALLY do justice to the customer requirements without breaking the bank. Then they could sit everyone down, explain the procedures, and enforce the assimilation that most companies usually have growing pains with.
I noticed that when moving to a new company that is trying to grow and achieve higher levels of accreditation. Management had intended a two-way assimilation to take place between my procedural knowledge and the procedures they already had in-place. What happened instead is old procedures are etched in stone and new ones are seen as an obstacle... The assimilation was one-way and the other middle-managers like myself are mainly concerned with keeping everything the same, despite there being improvements that could easily be made.
There are probably a fair number of consulting firms you can find for that.
But another (part of the) solution is using workflow software. Something that allows you to define and share procedures, indicate status of various taskings along the defined workflows, move data and such along. This clarifies to the employees what's needed, and gives an opportunity for them to provide feedback for improvements (too much granularity, not enough, too inflexible, etc.). Management, then, also gets better insight into the actual status of projects and taskings. It's part of continuous improvement to constantly be evaluating these sorts of procedures and clarifying, culling, adding to them. Management that's unfamiliar with these concepts is just bad management.
In the manufacturing world, the Procedural Consulting firm would conduct "lean events". (NB: Many, if not most, places do this incorrectly.)
The correct way is to collect meaningful metrics over time, and to encourage and reward employee feedback about ways to improve things. Then improve them (sometimes as a move across the board, but often by conducting experiments to see how well the new concepts work, then rolling out to everyone). The risk is that improved efficiency will make employees obsolete; some fear this (some may have actual panic attacks over it). Instead, you need a culture that sees this improved efficiency as an opportunity for growth (the same number of employees can now do more once people get retasked to new things).
I'm not sure what industry you work in or what size company, but my old department had a similar situation.
It sounds like your processes are either immature (no one really knows what the best way to do things is) or tribal (you have to know the right guy to talk to to find the information or knowledge you need).
Premature optimization is god awful, because it locks you into a world of pain - your business is inefficient and the people actually trying to follow the procedure find it incredibly painful and limiting.
If your process is immature, you need to invest some time and money into finding out what the best way to do whatever it is you're doing. If your process is tribal, you need to find the guy or guys that understand what's happening and pick their brains about everything they know. Their implicit knowledge needs to become explicit. In my situation, I found that just agreeing on common definitions was a huge help.
If you're grossly inefficient, don't write a procedure for a bad process...you're just ingraining inefficiency. Once you've found a good process, then write the procedure so you can train your people to repeat good, efficient practice.
And maybe most importantly, train your people to the procedure but make sure they understand it on a level so they know where it doesn't apply and give them authority to deviate from the procedure when necessary.
Sorry if none of that applies to you. I'm just speaking from my few limited experiences.
In filmmaking, there's a space for a relatively small and potentially cheap(er) hybrid manual/motion control grip unit. Something between a panther dolly (no tracks), a crane and a milo motion control with a stabilised head. If you're into robotics, this is your space. Companies have started to work on these issues, but part at a time (like DJI's Ronin). It's not enough though.
Imagine a giant (3-5m reach) monitor arm that can be operated manually, but that can also remember the moves and repeat them. All of that on a mobile base that can do the same, along with a stabilisation head that can do a pitch/roll/yaw. Must be usable with and without power.
Bret Victor-style dashboard and visualisation systems. I've been using Qlikview, Tableau and Excel for years and they are all very limited in what they can do. New dashboard solutions come out every hour but everyone is copying each other. I want something that I can mould to my problem, that I can touch and fully interact with. Hard, hard problem, but worth spending time on. Make it work in real-time scenarios too, please.
Wish this jumped higher in the thread. I mounted a cheap digital projector and a web cam pointed down over a white melamine table in my workshop thanks to Bret's "Seeing Spaces" talk.
Just being able to record and play back what happened on the table has changed the way I work on things. A commercial version of this project would be huge.
Wish more people were thinking like Bret Victor. I've been working with visualisation systems for more than 16 years and have never seen anything like his software. The tool he uses to easily construct charting within a few minutes is genius.
I'm actually close to launching this kind of product. A bit more complex than Word + Git, but I hope it will be worth it.
One question: is there a use case for live group-editing? Same way live code editors work. I have to admit that I'm not super familiar of the workflows in that industry, so I am mostly just designing features the way I would like them to work.
Do you mind sending me a message when you launch? Currently work with a bunch of engineers and technical editors and this is something we are looking to address.
I'm expecting to get the MVP done before the end of this year.
The MVP is mostly targeted at businesses who have to sign/generate contracts at scale. There's a lot of features for those use cases, and then there's features for users that the parent is talking about (pure contract-drafting).
I have not talked to any lawyers yet. I think the worst part is that I don't even know how good lawyers' computer skills are, so designing features based on my own competence feels wrong. Since I can design the UI to match the general structure of a contract, there's a lot I can add to make the user's life easier, but it also adds more for the user to learn.
Be very careful trying to solve this "problem". While MS Word change tracking may seem abhorrent to you, most lawyers are well trained in its use and are very comfortable using it. Also, switching costs are huge. A lawyer would not only need to see the benefit of an alternative but also convince all other lawyers they work with to put aside the solution they are already comfortable using and try something new.
I should note - yes, my wife sees no problem at all with this workflow.
My understanding of the workflow is that they will send a clean copy as well as the redline to each other. Sometimes someone will in response further modify the wrong copy, and send back, which causes problems as it becomes unclear what was accepted and what was not and someone has to manually go through and check things. I hear that complaint come up - so I know there is at least one pain point involved.
The in house counsels are probably the weak underbelly of the law industry. Any startup thinking of disrupting this area should probably target the in house lawyers first.
Not a problem in industry, but rather in society: people rapidly losing jobs due to automation
We all know why this is happening. One way to try to "solve" this problem is fighting the change (Luddites), but this doesn't work. Also if people don't have enough income, they stop spending - who's going to buy products and services anymore?
Another possible solution would be for companies that automate most of their work to be owned by the community in which they operate.
Let's take banks for example. Most of the jobs in the banking system can and will be automated. Imagine a country/state that has three banks, each of them owned by a third of the population (as shareholders). When a bank makes a profit, each shareholder gets dividends out of it. The same example can be applied to other industries, and in total people would get (in the form of dividends) the Universal Basic Income everybody is talking about these days.
Don't get me wrong, this is not communism. Companies would not be government owned, there would still be competition and private companies would still exist.
Anyway, there are also other ways to tackle the problem mentioned at the beginning. As Brexit and US elections showed us, it's starting to affect many people and needs to be addressed asap.
My joke solution is everyone tries to get into politics as a profession. You can't automate politics. It will either achieve UBI or create a hell on earth.
People cook food in their homes. You select the food from several cooks in your area in an app, drive to their house and pick the food.
I imagine a housewife/househusband who is already cooking for her/his family preparing a few more portions. Snap a photo, add a description and a price. It can be extra income for something she/he is already doing.
People would use the product because it is convenient, saves time, and gives them more food variety.
I don't remember where or when, but this idea was discussed a few months ago on HN, and there were quite a few reasons it's unlikely to happen: the safety of properly cooked food fit for human consumption, among others. If you could stay relatively small and under the radar, and get lucky enough for nothing bad to happen long enough to make billions like Uber, then maybe you'd have a chance to pull it off and get proper legal support/lobbyists.
There are a ton of regulations around preparing food for sale. They vary from state to state. There are cottage laws popping up in some states that allow certain foods to be prepared without heavy regulation, mainly for food categories that are more shelf-stable and don't easily spoil (i.e. turn deadly), such as baked breads, candy, etc.
Instead of "selling" the food, though, maybe the structure could be a food club or other value swap.
This would be great for my stepdad. He has a tendency to cook enough of a single dish to feed himself for months (for example: buying a turkey after Thanksgiving, then making and freezing months' worth of turkey soup). Perhaps instead of packing his freezer full of soup, he could sell it to people on the Internet?
BI for developers. Developers run many organizations and teams now, but most Business Intelligence-ish tools are either built for business people (Tableau, Domo) or for marketing (all analytics products, specifically GA).
I feel like there's a big opportunity for a tool/tools that are installed in apps as a package and then customized from there. Many teams build a version of this in house (Instacart open-sources theirs), and I think it should be a product.
New Relic was the inspiration for this idea. I shouldn't have to put my data into NR to get value, the data is already either in my app's DB or running through my app. There are actually a lot of reasons why having it be installed and within the app itself is ideal, opens up many doors that are closed by NR. I want something that starts with that as a first principle and then figures out the way to best execute it.
I'm considering writing a web-analytics-for-developers course/e-book. Out of the box GA is good and quite simple to implement, but getting anything non-trivial out of it can be very cumbersome. If anyone is interested, I would love to hear from you.
Not 100% what you are looking for, since you specify "installed in apps as a package", but we are using https://redash.io/ and love it. They are open source and very easy to run ad-hoc queries, save/schedule queries and alerts, and build dashboards.
Not my industry, but a friend of mine in law was discussing how incredible her in-house software is for managing billable hours relative to all her past companies. I poked around, and most law firms, even very deep pocketed ones, use somewhere between a bad tech system and no tech system to manage and track their work.
A small team could easily collaborate with some law firm (maybe take an investment from a few law firms), and create some very valuable software.
I'm a lawyer and a programmer and run a legal-tech startup. The largest issue I've found with lawyers and technology is that they don't understand it, and there is a lot of fear around it.
Most of the time I spent selling my software was assuaging the risk-averse mindset that exists in the profession, rather than advertising the benefits. I actually gave a TEDx talk on this point as I think it's an enormous issue in the industry.
So the ease of creating the software isn't the issue, it's the culture. That said, legal tech now has a pretty healthy ecosystem which is a delight to see!
I think it is also partly because any inefficiency in work leads to an increase in the Billable Hour. When it makes you more money it isn't really broken.
Spot on! The problem goes past efficiency and is related to the broken pricing model. The financial crisis actually caused this to change a bit as more and more clients were after fixed costs. I hope that trend continues.
I invested in a startup that did this (the first one) called http://www.urbanbound.com. They quickly realized there was no demand for this in the B2C world, so they switched to become a B2B employee relocation platform (again, the first to do this as a SaaS) and they just landed a Series B. Fish where the money is!
It's still surprisingly hard to send email newsletters. I want software or a service that sits on top of mailgun/ses/my own SMTP server and handles list management, templating, link redirection, and analytics. And that stores everything in a way that makes further data mining easy.
Lots of services solved the just sending email part. And lots of mailchimp type services offer a complete product geared towards marketers. Not much in between and most existing players have laughably poor APIs that make custom integrations and extensions painful.
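To make one of those "in between" features concrete: link redirection is mostly just rewriting every href in the outgoing HTML to point at your own redirect endpoint, which logs the click and forwards to the original URL. A minimal sketch; the endpoint URL and the `rid` recipient parameter are made-up names for illustration:

```python
# Rewrite newsletter links to pass through a (hypothetical) tracking
# redirect endpoint, enabling per-recipient click analytics.
import re
import urllib.parse

REDIRECT_BASE = "https://news.example.com/r"  # hypothetical endpoint

def rewrite_links(html: str, recipient_id: str) -> str:
    """Replace every href with a tracked redirect back to the original URL."""
    def repl(match: re.Match) -> str:
        target = match.group(1)
        query = urllib.parse.urlencode({"u": target, "rid": recipient_id})
        return f'href="{REDIRECT_BASE}?{query}"'
    return re.sub(r'href="([^"]+)"', repl, html)

body = '<p><a href="https://example.org/post">Read more</a></p>'
print(rewrite_links(body, "user-42"))
```

A production version would sign the redirect URL (so the endpoint can't be used as an open redirect) and skip unsubscribe links, but the shape of the problem is this small.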
Besides Sandy which aj0strow already mentioned, there's another solution called Mailtrain. It's open source and has some of the missing sendy features. I've been meaning to try it for a while now.
Will check it out, thanks. I could imagine an OSS based business model working here. I'd gladly pay for support and perhaps sponsor feature development.
Not in my industry but I'm part of an HOA and I swear 2/3 of what the people paid to administer it do seems streamlinable/automatable with lots of opportunity for making money along the way. Every time I deal with them I proclaim to myself, STARTUP! Then I forget about it..
There's BuildingLink in this space. I think this is the kind of thing where selling software to existing, backwards organizations yields thin margins, and it might be better to be a property management company with good in-house systems instead of trying to sell good systems to individual HOAs or small management companies.
I'm on my HOA and I imagine the hardest part of making this work would actually be sales. The sales cycle for an HOA would be multiple months between meetings and actually making a decision. Then there's the issue of rotating responsibility, retraining, no guarantees of technical skills among board volunteers.
I'd tend to agree with the other comments that the hired out management companies would be a much better sales target since they have a real profit motive to efficiency gains. At best a neighborhood HOA is looking at periodically saving some time and being a bit better organized.
Have HOA and totally agree. So many key functions that could be improved: submit work orders and report security incidents online, pay dues and special assessments electronically, review association budget, monthly news and events, etc.
Problem is these who would benefit (residents) are often disconnected from those against it (the employees). You would need boards that really are sold on the benefit.
I can confirm. HOA software is big. It's mainly managing PDFs and sending emails out. E.g. https://www.condocerts.com
It's a decent industry; my friend (not after this posting) makes ~$300k a year working in it. However, the race may be ending as these companies lock apartment managers into long, complicated contracts. Not sure how many apartment managers at scale still need one.
2 of our 3 rental properties have an HOA, and it's quite annoying getting postal mail about things and then having to relay messages to tenants. Or you get some action to vote on but have no clue whether it's a good thing or not because you're not involved in the politics of it.
Homeowners Association. Might be a US thing, but basically condos / townhouses / even whole neighborhoods form associations to take care of the 'commons', enforce a look for the neighborhood, etc. There's usually a monthly payment. Repairs / work orders / disputes are settled through them. There are bylaws...
I'm not in the industry, but I would really love a better way of growing your own food indoors: minimizing space while growing enough to feed two people a day.
Basically it is either build your own large, messy setup or buy a complete novelty piece of crap that will barely support a single meal... aka the AeroGarden.
It would be nice to have basically a large self contained opaque cabinet with drawers of growing food.
Basically I want a food growing appliance with plumbing and electrical hookups.
I would easily be willing to spend a couple grand on something like that.
This is not the industry I work in, but one of which I'm often a customer:
A turnkey package tracking system for small-to-medium shippers.
In much of the world the shipping (as in DHL, FedEx) markets are still very fragmented and do not look like they're going to consolidate all that much. (As to why, I have guesses but I don't know for sure. I'm looking at Central Europe right now but I expect this is true in many other regions).
As far as I can tell the package tracking systems are something the companies compete on, with the result that a lot of them suck or (worst case) don't exist.
Case in point: I'm currently waiting for a shipment that the seller swears they gave to the shipper, but the shipper's system doesn't recognize the code. Both parties maintain it's probably just not "processed" yet at the shipper's, going on three days now.
As a software guy, I find it crazy that nobody has a generic white-label tracking system that any random shipper can use in combination with some smartphones/tablets for label scanning. It only has to cost less than the company pays the owner's cousin's teenage son to write the tracking PHP code these companies would otherwise use, and I bet you could upsell all kinds of premium add-ons if it worked well.
I would love to see this, and I think it's a big enough market to actually accommodate a startup.
I was just at Slush, and I think one of the startup finalists was working on this problem, though honestly it's a pretty foreign problem to me.
The company is ShipWallet:
http://www.shipwallet.com/
Hey Biztos, why do you think this would be a big market? What would be some potential customers of such a turnkey tracking system? I imagine big players like DHL/FedEx would not be target customers due to their high traffic and already-existing internal package tracking systems.
Very late reply, but just in case you're still watching: I don't think DHL/FedEx would be obvious customers but they might be obvious acquirers if your tech really rocked.
AFAICT there are still a lot of small local and regional shippers who offer a better deal than FedEx/DHL -- I assume that's mostly on price but it could also be on expertise, proximity, nepotism, whatever -- and the deal has remained consistently better for many years despite FedEx, DHL, UPS, and other large players being in the market.
My guess is that the further away from the FedEx/DHL hubs you get, the less attractive their service and the higher their prices -- and it looks to me like more and more stuff is being sold online in these markets too, and that stuff has very low margins and needs to get shipped for "free" or at least cheap.
Furthermore, in Europe at least you have direct competitors to FedEx/DHL on the international level (at least within the EU). Players at that level have their own tracking systems, sure, but that means they have to have IT crews, software developers, etc. and I'm sure they'd much rather not.
It's probably a much smaller market but there are also specialty transporters for stuff like art and antiques. I'm sure they absolutely hate having to care about tracking systems when their value-adding expertise is so thoroughly elsewhere.
Normally I live in China, and rarely eat western breakfast.
Recently I returned to Australia to spend some Christmas time with my extended family.
A few mornings ago, I put some real bread I cut from a sourdough loaf in a toaster. Due to its irregular size, when it popped it didn't pop out completely, resulting in a sort of "toaster is too hot to insert fingers, toast is too hot to hold, toast is ready, find metallic implement to insert in to mains-powered device to extract toast" problem.
Sure. In China we would use chopsticks. But my point was not that there is no viable workaround, more that the mechanical issue should be solved.
Perhaps I've been playing too much Shenzhen I/O, but I feel like a basic IC and sensors could solve this by detecting the width/height of the toast and continuing to release until it was substantially ejected from the toaster.
I think most toasters pop up using a coiled spring. Replacing it with a linear actuator seems like the way to do this. Perhaps some high end toasters already use an actuator with a laser to detect if the toast is 'up' enough. Fancy stuff
Your suggestion amounts to "work around it". That's one perspective, but also consider the very real problems of irregular ends of the loaf (where cutting thinner would result in a piece too small along the other two dimensions to be useful), loaves with air pockets that need to be cut thicker to maintain structural integrity (and utility) of the resulting toast, and people who simply prefer thicker slices.
What you want, then, is more akin to a sandwich press rather than a toaster. Or perhaps a toaster oven. Mechanically, it's not impossible, but the design of a toaster (spring loaded release, heating elements on either side) can't be altered much without turning it into one of those two things (or a poor imitation of them).
Seems a tortured equation: perhaps a vertical sandwich press without the press! I was thinking it may be possible to replace the spring with another form of vertical linear actuator (e.g. stepper-motor driven), giving it more controlled movement, and make it smarter with proximity/LOS-style sensors. Perhaps the top cover could also expand and contract. The very fact there's a spring-loaded release shows this is old tech. The market can clearly tolerate a few dollars for a stepper motor, as there are 'smart' toasters selling for nearly ~$200 USD and the bottom end is ~undifferentiated. See http://www.brevillegroup.com.au/wp-content/uploads/2016/10/B...
Would an ordinary stepper motor cope with the temperatures? Our cheap toaster has a handle that lifts (so it isn't limited by spring return), and the switch at the socket is the backup for when you need to jam a fork in to reach your crumpet.
In the US, most toaster ovens I've seen have about 2-3 times the footprint of the average toaster, with additional height. This is actually pretty reasonable as you can use the toaster oven instead of the oven or microwave oven for many tasks (it's a less specialised tool than a toaster).
Last I read, US homes are the second largest in the world, after Australia. In Asia, many people live in apartments and that space is unavailable / non-negotiable. In addition, microwaves are not nearly so widely used.
And yet, every tiny apartment I visited in Korea and Japan had a toaster oven. The toaster oven is an incredibly useful and versatile tool that can stand in for bigger appliances, like full-sized ovens, in pinched spaces.
To be fair there shouldn't be any voltage inside the toaster basket area when the toast has popped up. Unless they did something stupid when designing the toaster. And you can always unplug it before sticking utensils in there.
I've had the same thing happen with toasters, and at least as of a few years ago it was completely possible for a piece of toast (of just the wrong size) to get stuck such that it not only didn't pop up, the toaster didn't turn off. So the toast starts to burn, you grab a knife, what could possibly go wrong?
This happened pretty regularly as it was a European toaster designed for perfectly uniform extruded wonder-"toast" and I kept sticking hand-cut slices from round loaves into it.
Sure. To be clear the gripe is less about the potential for a shock (real or otherwise) and more about the clearly nontrivial frequency and irritation of the issue.
With irregular pieces on this and other toasters I have used, part of one piece of toast can block the further vertical motion of the carriage at the top portion of its linear range.
I bought my bottom of the barrel Wal-Mart toaster for $6 about 8 years ago. It might occasionally have issues popping out a piece of bread but it cost $6 and it toasts bread consistently.
I'm mostly holding on to this thing to prove a point to the "they don't make them like they used to" and the "you have to spend at least $100+ on X or else it will break in a week" crowd. I know it isn't the best toaster out there but I'd take it over a $189 toaster even if both were offered for free just because I'd feel like an idiot for having something so ostentatious on my countertop.
Others have complained that if the slot is too big for the piece, the piece falls to one side and gets toasted in one part and not in another. That's other people's feedback, not mine.
Some toasters have wire meshes that close in on the bread to hold it vertical, but they tend to leave marks.
Surprised nobody has mentioned the elephant in our industry yet: recruiters and recruitment companies. I'm not talking about those employed by big companies, but those who are independent.
For the last year and a bit I've been contracting with a company, and the recruiter who found me has fees of around 10% per day of my work, which they bill the company.
They found me via LinkedIn, sent me a couple of emails, arranged a Skype call between me and the company, and sent me a bit of paperwork. Not bad for €10k.
I'm the founder of a recruiting software company (SnapHop), and I can tell you the recruiting industry, like the real estate industry, is actually shockingly difficult to disrupt. This is in part because recruiting is basically sales. Except you're not selling a simple widget but a massive change. A career change for many is filled with as much trepidation as public speaking (I'll find the source shortly).
The other problem similar to health care is regulation and compliance but mainly it is because the industry is a very human industry despite all the big-data machine learning promises. Getting rid of recruiting is like getting rid of sales.
The big players in the industry are pretty much equivalent to Google and Facebook -> Indeed and LinkedIn. The problem is you can only make so much money on advertising, so they are trying to automate recruiting but still rely heavily on a service-based approach (that is, you still need humans). This is because advertising degrades the signal-to-noise ratio.
When Google enters the industry (which I believe it just did recently), expect LinkedIn and Indeed to become even more service based (i.e. highly efficient super recruiting firms).
Consequently, cutting-edge recruiting software has now become more focused on marketing automation, which is what my company does (albeit we are more focused on traditional inbound, aka websites, instead of the annoying email nagware you have seen). That is, giving recruiters better tools to do their job.
That being said, my recent idea is: instead of giving recruiters (or corp talent acquisition marketing) sales automation, what if we give individual candidates that software? That is give the candidates drip campaigns and automated sales replies and beautiful personal career portfolio websites. That is allow smart candidates to sell to recruiters and company at scale.
I haven't figured out the monetization for it or the actual demand, but I think it would be interesting and potentially disruptive, empowering candidates.
> That is give the candidates drip campaigns and automated sales replies and beautiful personal career portfolio websites. That is allow smart candidates to sell to recruiters and company at scale.
Not sure I follow. Large companies generally follow a pre-determined hiring process. How would automated sales replies work for a candidate? There has to be a hiring event -- a trigger, for the reply to be relevant and be seen by the right person.
Interesting. I bet this would be useful all over the world. I wonder if there's anything like this available anywhere else. The longer I think on it, the better it sounds.
The biggest hurdle would probably be organized crime. It might sound crazy, but most ports are infested with deeply rooted criminals who are involved with even the most mundane aspects of the import/export arena. Of course, even organized crime can be disrupted with the right tech.
You're being facetious but maybe you should actually consider the history behind that sort of thing.
Yesterday my friend in LA invited a few of us in Toronto to come to his BBQ this weekend. The group chat became us discussing the logistics and price of how we might do that. 150 years ago, that would have been an insane concept.
300 years ago, when Benjamin Franklin traveled from Philadelphia to Boston, the trip took two weeks. The fastest mode of travel was by boat, and he nearly died in a boat wreck.
Not my industry, but in my area. I'm still looking for a good modular house that can be setup reasonably quickly, low cost, and can survive North Dakota winter and summer. Something suitable for a single person or a couple.
These guys sell full size or tiny model geodesic domes, which go up pretty quickly on a poured platform. They are supposed to be able to withstand hurricane force winds ...
They would seriously have to be cleaned given how toxic the paint on those things is. Plus, I would rather have something not prone to giving the local inspector a fit.
I have no experience with them myself, but I know there's a good bit of interest in them for non-traditional uses. A Google search for shipping container homes will turn up a lot of interesting results. It still might give your inspector a fit, but there are resources out there. You wouldn't be blazing completely new ground.
I don't see a lot on insulating and we are a bit far from the coasts to make availability great. There was a thread a while back where these were discussed.
http://masterkeypro.com/ is sort of the only entry in the market, and cores/companies-that-need-key-schedules change just often enough that, say, a $5-10/month service could gain traction.
Essentially inventory management for locks, including the specific bitting(s) for each lock? That actually sounds really fun to build, not terribly challenging, and probably has huge market reach.
One quick challenge though, my understanding is that the average org that needs this either contracts or has full time staff who manage their locks. Those are normally guys who manage this by placing the 10 cores they need to re-key on their work bench, maybe putting a tape label on, and then just going to the individual locks and dropping in a new core. I get the value as a management tool, but does this really improve things for the average staff locksmith?
Usually yes: either they go through someone like Hull Supply (the company I worked for) or they have their own department. Whoever they go through, however, still has to deal with this shit. MasterKey is fine if you already have a key schedule; it'll spit out as many pinning combos as it can.
At $300 a license there's room for competition, and if you could make a GUI that makes calls to a Prolog-style logic engine to automatically create the keying hierarchy (management needs keys to these doors, renters need keys to these doors, janitorial needs keys to these doors, etc.), you could easily beat the functionality of MasterKey.
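To make the idea concrete, here's a minimal sketch of the enumeration side of such a tool. Everything here is invented for illustration (the master bitting, the depth range, the "vary these positions" scheme, the MACS limit); real master-key design has more constraints (avoiding incidental/cross keying, progression rules), which is exactly where a logic engine would earn its keep.

```python
from itertools import product

MASTER = (3, 5, 2, 7, 4)  # hypothetical master-key bitting, 5 pin positions
MACS = 7                  # max adjacent cut spec: limit on depth jumps

def macs_ok(bitting, macs=MACS):
    """Reject bittings whose adjacent cuts differ by more than MACS."""
    return all(abs(a - b) <= macs for a, b in zip(bitting, bitting[1:]))

def change_keys(master, vary_positions, depths=range(10)):
    """Yield change-key bittings that differ from the master in the varied positions."""
    choices = [[d for d in depths if d != master[p]] for p in vary_positions]
    for combo in product(*choices):
        bitting = list(master)
        for p, d in zip(vary_positions, combo):
            bitting[p] = d
        bitting = tuple(bitting)
        if macs_ok(bitting):
            yield bitting

keys = list(change_keys(MASTER, vary_positions=(0, 4)))
print(len(keys))  # 9 alternative depths in each of 2 positions -> 81 candidates
```

A real product would layer the hierarchy logic ("management opens these doors...") on top of an enumerator like this, assigning each door a pinning that admits exactly the right set of keys.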
In linux land, there is Teleport, which is relatively new. There are probably some other options, but I liked the POC I did with teleport, it just needs to mature a hair.
In windows land.....I've got nothing, hence the problem.
The problem is big enterprises need support agreements and the other typical enterprisation features. An open source project alone will never fit most enterprise's appetites, especially for something as important as security/auditing/compliance. You at least need a company behind the open source project willing to provide enterprise setup and support.
The government could allow companies to pay part of a salary as a lifelong income bond, with a tax credit for the company. The government would then pay the holder a fixed monthly income depending on the value of the bond, i.e. deferred inflation. This could implement a minimum income idea very easily.
Software development tools/IDEs have come a long way but are nothing compared to what they could be. Things like static analysis, syntax/context-aware diff/merge, visualization of variable changes while debugging, visualizations in general, cross-language understanding, instant compilation, reverse debugging, data store integration, remote debugging, hot reloading, dependency management, cross-platform compatibility, documentation integration, design for async. The list could go on forever.
It is somewhat "possible" to get a subset of the features above today but it always feels more like proof of concept rather than a complete product and you can never get all of them in one environment.
The problem is that there are so many free tools that are "good enough" so it becomes quite a luxury to pay for the last mile. Barrier to entry is very large because of this and because it is a very big and complex problem space. Would love to see some competition here, only serious actors today are intellij and VS.
My sibling was an architect in NYC for a couple years. He found it very difficult to keep on top of the different regulations which are amended occasionally. They come in books, PDFs, old janky online tools.
We're working on a startup to bring all together into a modern search engine. It's called UpCodes.
There's a lot of space in the area of construction compliance that needs improving.
Global CDN with a fixed monthly cost and capped bandwidth. For example, pay $100/month and get up to 1 Gb/s; traffic above this limit is dropped.
Providers today offer uncapped bandwidth: some, like Amazon, without any cost cap; some, like KeyCDN, let you set a cost cap but take the resources offline when the limit is exceeded.
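The "drop above the cap, don't bill for it" behavior is essentially a token bucket. A minimal sketch (the rate and burst numbers are illustrative, not any provider's actual policy):

```python
import time

class TokenBucket:
    """Refill at `rate` bytes/sec up to `burst`; requests that can't be
    paid for are dropped rather than queued, matching the capped model."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        # top up tokens for the time elapsed, capped at the burst size
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False  # over the cap: drop, don't queue, don't bill

bucket = TokenBucket(rate_bytes_per_s=125_000_000, burst_bytes=1_000_000)  # ~1 Gb/s
print(bucket.allow(500_000))    # fits in the burst allowance -> True
print(bucket.allow(1_100_000))  # larger than the bucket can ever hold -> False
```

The same shape (tc-tbf on Linux, for instance) is what a CDN edge would run per customer to enforce the fixed-price tier.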
Email parsing and extracting data from it (input data in multiple languages) that doesn't strictly depend on predefined templates but can adjust itself. Think parsing emails from multiple providers of a certain kind of service and exposing a common data model. Looks like a perfect use of machine learning.
Gmail extracts data from emails with certain tags in them, microformats or one of those standards. I think OP is talking about something more flexible using NLP.
Haven't checked, but from description, looks like it's template-based and rule-based. What I have in mind is something that can be more autonomous, and still work if the sender's template has changed.
A meal prep service for bodybuilders. The hardest part of bodybuilding is eating right. Something that is affordable, nutritious, and calorie-dense would be extremely valuable. There is no meal delivery service I know of that satisfies the last criterion. $10 should get you at least 1,000 healthy calories.
Have you considered Soylent or one of its many derivatives? Some such as Joylent are a little cheaper while others have a wider variety of flavors and there are even keto options like Ketolent and Keto Chow.
Each Blue Apron meal is supposed to be for two. Can you just eat the whole thing yourself, or is it a problem of macros + price with the existing meal prep?
We are a human-powered audio/video transcription service and have lots of training data that could be used to train a speech recognition system. An ASR-as-a-service cloud platform where we could use our data to continuously train and improve the models would be very useful for us.
Mind sharing the name? I have friends doing ASR in college and they may be interested in your company. Though since their focus is on research/publishing papers, they may stick to standard datasets. But knowing about a new data source is always welcome.
Radio Industry. I know it's a slowly dying business, but that's because the giants are too slow to turn.
I want to be able to go to a website, ask for demographics that I am looking for and be able to purchase an advertisement and hire talent to record my commercial.
I bought display ads in Nature (the journal) a while back. There's no way to purchase without talking to a salesperson. However, I expect they want to keep it this way.
I sell advertising on my site without any automated ad network and it is fantastic.
Automating the process might be easier in some respects, but I am very protective of my readers and only want ads which are both high quality and relevant. Having to talk to me is a good filter to ensure that happens.
I want my pool cleaning guy to be able to go online and buy a radio commercial advertising his business. If he has to go through a salesperson and negotiate and all that, you've already lost him.
I think there is a ton of money sitting on the table waiting to be claimed for this opportunity.
Why do you think there is a ton of money in this? Currently buyers go through salespeople -- it's not elegant, but it is working. How much more money would an automated system be able to extract, given that some buyers are comfortable doing it in person?
Right now the small-time business money is not on the table at all. A couple of reasons for that.
One is that it's expensive to buy, partly because you have to pay salespeople. If you remove the salespeople layer, you lower the cost of the ads and allow these people to come to the table.
The second reason is that dealing with salespeople is a hassle. Their incentives do not align with those of the buyer. Salespeople do not want to deal with small-timers because the commission is too small.
As a SaaS provider, one of the key indicators of a customer at risk of churn is the presence of another competitor in their account. A service which notifies you once an account signs up for a competing service would be immensely valuable in helping to target retention activities.
1) how about you find ways to make retention a continuous objective
2) get used to the fact that not all customers will be retained. I try stuff all the time that I have no idea will solve my specific problem. If you do #1 right, I would know that you want to hear about my specific problems and will offer ideas on how your SaaS can solve them. That's the most you can hope for if your service doesn't immediately do what I initially hoped it would (or is more burdensome to implement than I thought, etc.)
I can think of a solution that's VERY VERY anti-consumerish.
Being a SaaS provider, create an extension that users have to install (something consumers want...). This extension would have permission to read history / the URLs being visited.
Transmit a blacklist of URLs (so you don't need to transfer the user's data back to the server) and match against it. If it matches... you have to work on that customer a lot more.
You could employ some sneaky tactics to ferret out whether a customer's browser visits a competitor's website... without letting the competitors or the customer know.
This sounds like a massive invasion of privacy, to me. It also sounds like a great opportunity to drive customers further away when you contact them based on that information.
An easy way to design things using catalog mechanical parts.
I imagine being able to create a structure or assembly from stuff in the McMaster-Carr or ThorLabs catalogs without having to be a CAD expert. When I'm satisfied with the thing on my screen, I press a button, enter my credit card number, and the parts arrive from their respective vendors in a couple days.
A search feature would be vital, of course. Being able to modify some parts would be useful, e.g., if I need 14 inches of pipe, I can cut down a 24 inch piece. Drawing from multiple disciplines would be necessary, e.g., combining an electrical box, optical assembly, and structural framework.
Maybe you get paid via a little kickback from those vendors, who also agree to integrate their catalogs with your service.
I'm in the AI industry and our problem is that we have so little grasp on what intelligence actually is, that we stumble in getting our machines to mimic portions of human cognition even on the vector space level.
I do embedded Linux porting. The big job is writing drivers and building the device tree. This would be so much simpler if schematics could be annotated with scrapable clues about pin connections and device parameters (e.g. device tree fragments). It's all derivable from the schematic and chip specs, but that part could be automated almost completely.
It would put me out of a job, but better than shooting myself in the head next time I have to face 40 hours of slogging through data sheets.
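As a rough sketch of what "scrapable annotations -> device tree" could look like: if schematics exported something like the dict below (the annotation schema, the `acme,fake-sensor` compatible string, and the register address are all invented here), emitting an overlay fragment is a simple transform.

```python
def dts_fragment(bus, devices):
    """Render a minimal device-tree overlay fragment for devices on a bus."""
    out = [f"&{bus} {{"]
    for label, dev in devices.items():
        out.append(f'    {label}: sensor@{dev["reg"]:x} {{')
        out.append(f'        compatible = "{dev["compatible"]}";')
        out.append(f'        reg = <0x{dev["reg"]:x}>;')
        gpio_ctrl, line = dev["interrupt_gpio"]
        out.append(f"        interrupt-parent = <&{gpio_ctrl}>;")
        out.append(f"        interrupts = <{line} IRQ_TYPE_EDGE_FALLING>;")
        out.append("    };")
    out.append("};")
    return "\n".join(out)

# Hypothetical scraped annotations: one I2C sensor at 0x48 with an IRQ line.
devices = {
    "temp0": {
        "compatible": "acme,fake-sensor",  # invented compatible string
        "reg": 0x48,
        "interrupt_gpio": ("gpio2", 17),
    },
}
print(dts_fragment("i2c1", devices))
```

The hard part the parent describes isn't the rendering, of course, but getting EDA tools to carry those annotations in the first place.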
Manufacturers are starting to produce register maps in the form of SVDs (CMSIS System View Description, an XML format). Wouldn't starting with these be a good start for producing the FDT?
Nope. But fortunately most chips have widely available datasheets. Except Chinese knockoffs which can involve lots of sleuthing and black-box testing. Had to write a camera driver for a clone that had spotty support (a poor copy of a better device), took a month to get it right.
It's not really start up material but Sage/MAS is basically crap for companies that need custom work. The door warehouse I worked at basically needed 2-3 parallel system (1-2 of them being us scribbling on paper) because we custom fabricated doors and MAS was basically useless for passing that kind of thing around.
P.S. If any of you are in Austin I recommend Hull Supply, they're good people.
It's not my industry, but I've been thinking about my health habits, and a WaPo article made me realize I don't even know what a healthy weight range to target is[1]. BMI ignores bone structure and muscle, and the most popular/affordable body fat estimations are ±10%. Is there something else out there to help make an individualized weight goal? Or any other health goal?
A tape measure and formulas are better than both weight and BMI. And while the errors can be large for some, you won't mistakenly think you are fit at 35% nor fat at 10% using them. There's actually not very much good knowledge out there on exactly what levels are optimal for health; it's mainly based on epidemiological research. However, as long as you aren't below 10-12% BF (ca. 84 cm waist) there is probably no danger going down. If you are heading toward 25% BF and above (ca. 98 cm waist) you should probably review your habits.
If you want to keep it simple, just set some random measurement in-between those numbers as a limit and start dieting when above.
I don't think the big issue for regular people is that they have tried measuring themselves but by random chance got an inaccurately low estimate; rather, they haven't really tried measuring, or don't succeed in losing fat. The latter is sad, as losing fat is mostly a solved problem, but misinformation is so widespread that it's hard to get on the right track. If someone could solve the information problem they'd be a public health hero!
For people into bodybuilding or serious fitness, estimating fat is a bigger issue. Some people go for DEXA scans, but they are quite expensive and take time. Something doable at home, but more accurate than a tape measure or calipers, especially for tracking variations, would be ideal.
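For reference, one of the tape-measure formulas mentioned above is the US Navy circumference method. Here's the male, metric variant (the example measurements are made up; accuracy caveats from the parent comments apply):

```python
import math

def navy_body_fat_male(waist_cm, neck_cm, height_cm):
    """US Navy circumference method, male variant, metric units."""
    return (495 / (1.0324
                   - 0.19077 * math.log10(waist_cm - neck_cm)
                   + 0.15456 * math.log10(height_cm))
            - 450)

# Example: 85 cm waist, 38 cm neck, 180 cm tall -> roughly 16% body fat
print(round(navy_body_fat_male(waist_cm=85, neck_cm=38, height_cm=180), 1))
```

The female variant adds a hip measurement and uses different coefficients; both are crude, but cheap and repeatable enough for tracking trends.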
The ones poorly served by iPhones and Androids. The ones that see gadgets as a tool and not an identity. The ones that want to augment their abilities without enslaving themselves.
One problem from my past experience in the architecture industry:
Every country has its own rules & regulations for representing design drawings (documentation): many small things like floor plans, sizes, fonts, frames, and symbol descriptions are all different. As a result, CAD software developed in one country is very hard to use in another without a lot of weird tricks that work only for a subset of the original functionality.
I believe it could be solved by separating the actual building model from the view/render representation, like an AST and an interpreter. But unfortunately it's super hard to do, just because that AST-like model would have to include a lot of complex stuff.
I recently had to reserve six hotel ballrooms/conf. rooms for focus groups. Hotel sales offices suck. They are not responsive, sometimes taking days to respond, they have old technology (if any). They require signatures on multiple documents (almost none use online forms). For some inexplicable reason sales and catering managers all have assistants, who by the way, cannot actually do anything or give you any information, they are just clerks to take info. It was a coordination nightmare to set up all six locations and dates. There is great opportunity here to automate and streamline this process.
Many doctor offices still have horrifying paper record systems, don't use e-invites, etc. Companies are fixing this but the market is still huge.
I don't know why the few companies that are in the IoT space for oil and gas aren't scooping up literally billions in missed opportunities by sticking cell-enabled sensors on oil platforms, fields, etc. Then there's the next billion dollars waiting for whoever starts sticking controllers next to the sensors on valves, etc.
Drone inspection and repair in oil and gas. People are doing this but it's taking way too long to take off for how much money is there just waiting to be picked up.
Sticking a laser scanner on a drone. Again, people are doing it, but what the fuck, there's so much money just sitting there.
If you're looking for contract work, just start browsing random EPC websites and calling up the shitty ones. Probably a good 10,000 at least that are still rocking 1995 crapsites, and not in the "good" "low functionality low load time" way, the "using tables for layout" way.
Trains should be automated. They already are in Taiwan for some lines; it's been feasible for years.
Motorcycle safety is still subpar for where we are in material science. There are a lot of riders out there that will pay buckets for greater safety. We've figured out how to not get our skin ripped off but I believe there's still a market for preventing broken bones, spinal snaps, decapitations, and the like.
Somehow we still don't have GUI HUDs in our moto helmets. Like BMW is working on something but honestly it would be a relatively simple and profitable thing for a very small startup (2 man team working part time). Literally even just casting your smartphone screen to the visor would be enough to get people buying so they can have a HUD map and shit.
Big market for motorcycle storage solutions. I giggle whenever I see someone with an ammo box strapped onto their sports bike. We already dropped hundreds on gear and thousands on the bike, we spend many more hundreds or thousands on modding the things, there's ample opportunity for more elegant and functional storage solutions.
VR allows for limitless desktop screen space in a portable package. I'd like to be able to bring an HTC vive and a tiny screenless box to plug it into that would allow me to have a "multi-monitor" setup while I travel. Some say the resolution isn't there yet, I say make the text bigger. I have no problem reading the stuff in steamVR.
I get made fun of every time I bring it up but I'm convinced people are stupidly ignoring lighter than air travel, transportation, and data distribution, especially in an automated sense. Google has their wifi balloon thing but they dropped their blimp transport truck project. I think it could've been a thing.
There may be some margins available in teaching low-income people how to cook and eat basic foods instead of frozen meals. It took getting a scholarship to college for me to realize that we were losing buckets of money eating frozen meals and fast food because we thought it was the "cheapest option," not to mention how unhealthy we were for it. Think of somehow getting low-income folks to buy potatoes, onions, peppers, dried beans, cheap cuts of meat, etc. and demonstrating how it's faster, cheaper, healthier, etc. Potentially a government funding opportunity; it would save on EBT and healthcare costs.
Kids learn by doing. Good luck changing anything about education in the USA though.
Someone would be able to take over any industry in Taiwan that they please if they start up the company and instantly pay 2x local salary, give PTO, and have other basic benefits that we take for granted here. It wouldn't be much; you'd be paying ~44k/year USD for an engineer, for example. You'd be able to poach the best talent, you'd draw shitloads of negative press from pissed-off old Taiwanese businesspeople (no press is bad press), everyone would be telling you you're wrong, and you'd Donald Trump your way straight to the top. Think Stratton Oakmont or Gordon Ramsay as well. Just don't break the law. Also learn Chinese.
GP offices: Plenty of solutions already - generally regionally based or by payer. Also an issue with vets, hairdressers etc - pick an appointment based niche. (Have invested in 2 niches, advised others)
Wireless sensors for industry: Thousands of players here, including network providers and different tech stacks. The game is to find a niche that is easy to enter and pays well. (Have invested in wireless water metering, advised others)
Drone inspection: Plenty of players here, but it's a find-a-niche that you can sell to play, and you need operators for each application. (Have advised drone and drone application companies)
Anything on a drone: Drones don't last long in the sky. (as above)
Trains: Trains get solved at the political level. Good luck in the USA. (am train lobbyist)
Motorbike safety: Check out where Dainese and BMW are going - air bags et al. I spend a fortune on BMW gear. (Motorcyclist)
GUI HUDs: The main player failed (Skully - internal issues). I know of at least one early-stage company attempting a generic HUD solution. (Have met, but not invested in, the early-stage one; and decided, luckily, that I didn't like the Skully people and did not pre-order.)
Motorbike storage: Has been a huge market since the 90s, when you could only get adventure boxes from one supplier - in Munich. Now the problem is solved by countless suppliers and manufacturers - unless you buy a sports bike, which has a very different use-case. My first adventure trip was in '98 and I needed to self-assemble; now you can ride out of several branded dealerships with a capable round-the-world bike. (Have motorbiked round the world - in pieces)
VR: The use-case is really only games so far. See Playstation. (have 2 VR kits)
Lighter than Air: Capacity and cost versus shipping are the issue, while air cargo is well solved. Meanwhile the helium market is stuffed. But I love the industry too. (Read Bill Gates's recommended book on international shipping etc.)
Low income cooking: Those low-income people are time-poor, poorly educated (the US system is dreadful), and marketed to by monster franken-food companies that price under the cost of decent food. Yes, it's a hard problem to solve. (don't get me started)
Education: Indeed. Start with politics and how schools are funded.
Taiwan: (sadly) Why would you pay more than $1 over market?
EPC means "engineering, procurement, and construction" company. So like, lyondell-basell or jacobs engineering or foster wheeler (now AMEC) or Mustang (not the car) or KBR.
PTO = Paid Time Off. Catchall for vacation time, sick time, holidays.
Antibotnet -- software products and hardware gizmos to detect when IoT gizmos get pwned, recruited into botnets, and then the botnets are activated to attack KrebsOnSecurity or somebody.
This stuff should be priced for home and small office use.
"BEEEP" Why is your laser printer's ethernet port issuing tons of SYN requests? Unplug that printer! Factory reset it!
Check out https://www.lodr.io. We focus on loading files into Redshift. The data processing engine is built on a serverless backend for faster data loads at better prices. You don't have to define schemas, data types, etc. Drop a file, transform (optional), and load.
Vehicle purchasing. I despise having to negotiate a lower price when both parties know what the vehicle is actually worth, what incentives are available, and that the salesperson is bullshitting the buyer. My soul dies a little every time I have to buy another car. I wish someone would put dealerships out of business somehow.
Enterprise password/key sharing with an audit trail, non-repudiation, expiration, and all that jazz. Most (all?) companies I know, including ours, have challenges sharing such info. The current solution at most businesses I know is mostly Google Drive or Dropbox.
Industrial heat treating. PID controllers with expansive reporting software. The whole industry is scrambling right now to make their own or buy systems upwards of $10K.
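For anyone unfamiliar with the PID controllers mentioned here, the core loop is a few lines; the value (and the $10K price tags) is in tuning, safety interlocks, and the reporting around it. A textbook discrete form, with invented gains:

```python
class PID:
    """Textbook discrete PID loop. Gains here are made up, not tuned
    for any real furnace; real controllers also add anti-windup,
    output clamping, and derivative filtering."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measured, dt):
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.1, kd=0.5, setpoint=200.0)
u1 = pid.update(measured=190.0, dt=1.0)  # P=20, I=1, D=0
u2 = pid.update(measured=195.0, dt=1.0)  # P=10, I=1.5, D=-2.5
print(round(u1, 3), round(u2, 3))
```

The reporting layer the comment mentions would log `measured`, `error`, and the controller output each cycle, which is mostly a data-plumbing problem, not a control one.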
Sales. It's crazy that selling products requires the build-out of entire sales, marketing, and support organizations when most companies could be well served by an Uber for Sales.
Anything that has to do with mining engineering and the use of software is just ripe for startups; getting the industry to adopt it, though, is a different story.
One of the best places to look is where an emerging platform overlaps with existing specialty fields. For example, VR is currently an emerging platform, here's a few things that maybe someone is pursuing but there's no established solution for:
- 3D Storyboarding for VR cinematic storytelling (mixed desktop/VR)
- VR home tours for real estate
- VR sports training for golf/tennis/baseball
- Rehearsal/Staging VR for event planning.
- VR Training for DIY/Construction.
etc.. In particular if there is a field that you personally work in or know someone that's a great place to start.
All these ideas are wrong, because they are cash grabs. Money will never satisfy you. Find a startup you love by looking back to your best memories (10 years ago or more) of creating something and seek to give people that experience. Then, even if you fail financially, you'll still have more great memories 10 years from now.
I fail to see how industry problems ripe for technical solutions equate to cash grabs. Problems are essentially the basis of every startup (if we generalize a little), and I can assure you that engineers can be passionate about solving a problem.
I also don't really understand what you mean by giving people the experience of something you were creating in the past. What does that mean? As advice it seems exceptionally vague; something a writing teacher might tell me to write about, but not solid advice for founding a startup.
Downvoted because your comment is only tenuously connected to the subject: which problems in your industry could be solved by a startup? We're talking about problems that need solutions, not the worthwhileness of the financial rewards from problem-solving, an abstraction and question that is wholly irrelevant to the topic at hand.
This is the best thread I've ever read on HN. It's like going to a conference without the time and cost spent. How about a product that does this thread once a week? Half therapy, half happy hour.
We've been talking about doing it monthly. And I have been talking with a few people about creating a conference around it. No pitches, no TED-style let's-save-the-world talks. Just problems and people who are looking for problems to solve.
Regulations will unbundle research from liquidity provision starting sometime in 2018 (moving regulatory target, but theme seems clear). Sellside will have lower incentive to pay large research groups; buyside will have to pay explicit fees for research advisory. There is a big opportunity in providing platforms, with macro and market data live, where researchers can interact with capital managers, given that said researchers will likely find themselves bereft of their current distribution networks (bank sales forces) sometime in the next few years.
Equities already have a hybrid form of this where the buyside earns "credits" to be allocated at the end of each year to research providers. But fixed income is at least twice as big as equities, is much more opaque because it is essentially unlisted (mainly "OTC" = "over the counter", i.e. only those in the know), and is therefore much more susceptible to disruption.
Apologies, the only "CSA" I know is the Credit Support Annex between counterparties, which seeks to minimize risk on the mark to market of derivative contracts. Is this what you are referring to?
We started StartupCTO.io as a way to address this. We saw the same lack of support for CS grads thrust into leadership, so we're doing something about it.
Too many startups are trying to disrupt industries, thus creating a problem that cannot be solved by one more startup. It's the same problem with, say, capitalism: you cannot solve it by organizing one more political party. I am not saying capitalism is bad. It's a problem.
Without being P.C., I have to assume you mean poor people. And TBH, I've had the thought that there must be some big opportunity out there to address that segment. But I have no idea what it might be.
I feel they live very differently and would spread ideas among their network when some app/service helped their lives (cheaper, easier). I have some friends and family that fit this mold, and it's obvious they have a different thought process when making financial choices, even simple ones like shopping for groceries, and they depend on things that >=average earners don't.
And, it seems most of them have smart phones these days. So it's probably only a matter of time before something hits big with this group.
Most apps are for what many would say are "rich people." You pay someone to drive you? You pay someone to run your errands? You pay extra for grocery delivery? Eating out is expensive enough, you paid extra to have it delivered? It's for people who can afford convenience. Opportunity cost is not a real thing for poor people because they lack opportunity.
You gave a very vague, short answer to something that really needed a better answer: a story about what the poor are dealing with, or a volume-market possibility you were hoping someone would supply. The downvotes are deserved.
You could always give a good paragraph answer that everyone else is contributing to this otherwise excellent post. Maybe the downvotes would disappear.
Do you have something specific in mind, regarding the low-end of the market? Maybe a couple of examples? I'm really interested in understanding what you mean.
I don't understand this idea that everyone needs jobs, all of the time. Ideas are not created so that jobs can be created, and workers on a societal level don't work so they have jobs. They work so the business operates and serves society. Why not develop ideas and businesses that provide value to all of society and provide benefits such as a basic income that compensate for the fewer workers needed?
> Why not develop ideas and businesses that provide value to all of society and provide benefits such as a basic income that compensate for the fewer workers needed?
Because this is a pipe dream that ignores the decades of suffering it will take to reach that state, if ever.
Certainly, but automation seems inevitable at this point because it actually is beneficial to most of society, in terms of productivity. I don't think becoming less automated to save jobs is any less of a pipe dream.
> I don't think becoming less automated to save jobs is any less of a pipe dream.
It may seem inevitable. Until you have an entire generation of the work force put out of work. Then the only thing that will be inevitable will be civil war. We always need to have work for people to do. It's fundamental to our nature. It's the modern expression of hunter/gathering.
But there's no real precedent for an entire society needing few workers, so I'm not sure that either of us could guess with any certainty what would happen. I'm not saying things will work out, but it's never been tried to the best of my knowledge.
Yeah, I don't mean this disrespectfully, but that's probably one of the worst rationales for trying something, at least for something as important as this.
I agree. I'm not the one trying to change the status quo. I'm not actively invested in this cause because I don't see clear dangers yet. Jobs are naturally becoming more automated by nature of productivity and efficiency, so if you want to prevent that you need to make a stronger case than "it may not work because we've never done it before".
These questions pop up every now and then, and while I get the intent, you're not gonna get anything useful out of it. The idea that someone who knows how to code can disrupt an industry that they are not a part of is disingenuous, and the examples that you can find are exceptions, not the rule. It's also extremely naive and presumptuous... what makes you think people in the industry haven't already tried? People who fall prey to this delusion end up in one of two categories: those that attack easy problems with tiny markets, and those that attack hard problems and spend decades learning about and becoming a part of the industry before they solve them.
As someone in the supply chain and logistics industry, I can list for you hundreds of people that know the traveling salesman problem and precisely why it's not applicable to their situation. I know hundreds of people that already know how to better manage their safety stock than someone who suggests using Gaussian demand models. I know hundreds of people who can optimize last mile delivery costs orders of magnitude better than a drone engineer. I know hundreds of people who can manage inventory distribution and ordering automation better than someone who knows databases.
And sure, there are companies out there that are doing everything ass backwards and could use some help, even if it is primitive and simplistic. And when they decide to look for it, who are they gonna choose: the guy who saved Amazon $500M/year with their truck load optimization expertise, or you, with your shiny website and a trick you learned from a textbook?
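For concreteness, the kind of "trick you learned from a textbook" being dismissed here is something like the classical Gaussian safety-stock formula. A minimal sketch, with every input a made-up illustration number, not anything from the comment above:

```python
import math

def safety_stock(z: float, sigma_daily: float, lead_time_days: float) -> float:
    """Textbook Gaussian safety stock: z * sigma_d * sqrt(L).

    z           -- service-level factor (z = 1.645 targets roughly 95%)
    sigma_daily -- standard deviation of daily demand
    lead_time_days -- replenishment lead time in days
    """
    return z * sigma_daily * math.sqrt(lead_time_days)

# Hypothetical numbers: 95% service level, demand std dev of 20 units/day,
# 9-day lead time.
print(round(safety_stock(1.645, 20.0, 9.0), 1))  # 98.7
```

The comment's point is exactly that this one-liner assumes stationary, normally distributed, independent demand, and practitioners already know where those assumptions break.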
So as a piece of advice, if you aren't part of the industry already, don't try to do B2B in that industry. B2C is fine, because as a consumer you are ostensibly a part of the industry...but B2B is a death march.
I'm sorry, but this is rude, discouraging, and flat out wrong. This sort of thing happens all the time. I personally work for a company whose founders planted it in an incredibly calcified industry despite having no industry experience, and I can rattle off dozens of successful examples of this.
In fact, this very attitude is the exact reason why there have been industries that are primed for "disruption." If only those within the industry, burdened with preconceived notions and patterns of thought, try to improve an industry, you are very rarely going to see revolutionary change.
You don't need to "disrupt" anything to be a very good business, but I really resent this "leave it to the pros" attitude.
>I'm sorry, but this is rude, discouraging, and flat out wrong.
I'd rather people feel slighted than to have a bunch of kids fresh out of college trying to disrupt, say, the health insurance industry.
This isn't about "fresh eyes". The problem with having no domain knowledge is that you can't identify many of the issues that need solving within an industry. At every job I've ever had, I've seen issues that I can solve with my experience.
> The idea that someone who knows how to code can disrupt an industry that they are not a part of is disingenuous
Uh, no. The idea is that you identify the problem, build the team to solve it, build something quick that can prove that you can solve it in a better way than others, iterate to the point that CAC < LTV, scale.
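For anyone unfamiliar with the shorthand, CAC < LTV means the cost of acquiring a customer must be below the value that customer returns over their lifetime. A common back-of-envelope version of the check, with all numbers hypothetical:

```python
def ltv(arpu_monthly: float, gross_margin: float, monthly_churn: float) -> float:
    """Standard SaaS approximation: LTV = ARPU * margin / churn.

    With 5% monthly churn, the average customer lifetime is 1/0.05 = 20 months.
    """
    return arpu_monthly * gross_margin / monthly_churn

# Hypothetical unit economics: $50/month per customer, 80% gross margin,
# 5% monthly churn, $300 cost to acquire a customer.
cac = 300.0
value = ltv(arpu_monthly=50.0, gross_margin=0.8, monthly_churn=0.05)
print(round(value, 2), value > cac)  # 800.0 True
```

"Iterate to the point that CAC < LTV" just means iterating on the product and funnel until that inequality holds with room to spare.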
How can you solve a problem that you can't adequately analyze or characterize? How will you know if you aren't trying a failed solution if you don't know what has been tried? How will you know if your simplifications and assumptions are realistic if you haven't seen how they've played out in the past?
Often they don't have the tech skills or desire/will to start something themselves, but are happy to join you if they recognize that it's a big enough problem.
If your expertise is organizational and you can put together the right team with those that do have the domain knowledge, I'll concede the point. But domain knowledge is irreplaceable if you actually want to solve hard problems with scalable markets.
I sort of agree. But it is also true that there are many industries and groups that are way behind the state of the art in technology. That may be for good reason, or it may be because it hasn't worked for them yet, or may be because it can't work, or because they avoid tech, or...umpteen reasons.
So yes, a CS grad probably won't come along and revolutionise the Supply Chain industry, but mainly because SC has been at the forefront of areas of tech for decades. A CS grad may, however, come along and revolutionise... well, just look at all of the recent famous tech companies... communications, travel, social, arts and crafts, whatever. Many areas are ripe for fresh eyes and new approaches.
I do agree that domain knowledge is often massively under-emphasised though.
Before reading the comments I would have agreed, but man... there really are a lot of juicy suggestions, not just trivial ideas like another travel app.
This is just as wrong as anything can be, and you have nothing to back it up. Furthermore, you seem to have missed why these questions are asked. The problems in many industries are hidden problems which can only be understood by those in the industry, but the solutions are often made by people outside the industries in partnership with insiders.
So they are immensely useful and many times more relevant than most other discussions about building companies.
I probably shit on the idea more than I needed to, simply because it's entirely possible that you could be looking to form a team and rally around a serious problem, and that team could be composed of people who deeply understand the problem in addition to fresh faces with a different approach. And I would fully support that approach.
Lately I've been approached by a lot of "startups" who have made it inadvertently apparent that they know about a serious problem in the industry but have no experience in the industry, are taking absolutely idiotic and long-discredited approaches to solving the problem, and don't even speak enough industry lingo to be understood. These are the people I tried to address with my comment.
If you'll notice, I was actually the top commenter on your original discussion [0]. I still stand by everything I've said in that comment. It is a potential multi-billion dollar idea, and the market size can be verified by pretty much anybody with basic market research experience. And while I have plenty of valuable and marketable experience in the Supply Chain and Logistics industry, even I am well short of qualified to lead a company trying to tackle that problem. I would definitely consider being a part of a team that was formed to solve the problem, but here's the thing: If you learned about the idea from an Ask HN thread, I almost certainly wouldn't join your team. Maybe if Elon Musk vouched for you and Marc Andreessen was throwing money at you. Maybe if you and 9 other committed engineers had a collective 100 man years of proven experience solving hard problems across a variety of industries. But trying to be the leader of a team trying to solve a problem that you didn't know about and don't fully understand is a huge obstacle to overcome. And I think it is perfectly reasonable of me to try to dissuade the average HN reader from even trying.
I think this is where you are missing the bigger picture, though.
The solutions to the problem in your industry might not be based on the skills or insights of your industry.
It might be that someone recognizes the similarity from another industry where the problem was solved in a specific way or that the insight just so happens to be solvable based on the knowledge of some specific technology.
And it's NOT about ideas; it's about recognizing the real problem that hides underneath, which requires experience to understand.
I think both of you are talking past each other here.
The crux of your argument is that an outsider may be able to pattern-match better than industry insiders/domain experts. In other words, an outsider can recognize a potential solution by translating his/her experience with an efficient solution from a different industry.
He is saying that you need to properly understand things from the perspective of an industry insider before you declare that your solution is disruptive. In other words, you need to understand why things are the way they are before talking of reformation. This is the central lesson behind Chesterton's famous "Taking a Fence Down" [0] quote.