2 years, and instead of adapting one of the PLENTY of DSI screens out there they opted for a convoluted DSI-to-parallel conversion, with a 10-year-old resolution to boot. Color me not impressed :(
This is just like when they released the camera module. Instead of opening the MIPI interface to developers they shipped a binary blob locked to one particular camera module from one vendor, because fuck you, that's why (well, actually one of the RPi/Broadcom engineers said something like "people wouldn't be able to figure out how to color-correct/debayer because it's a trade secret of the camera module manufacturers, so why bother").
I'm dreaming of general availability of touch enabled e-paper/e-ink displays.
Not just for the Raspberry Pi, but especially with it I would be one step closer to my always-on home dashboard.
I would personally pay a lot for an e-paper-based laptop specifically for text editing. For writers or programmers, I don't think you can beat e-paper. No eye strain, outstanding battery life, perfect visibility in direct sunlight...
Pixel Qi tried and failed, though you can still buy 3Qi panels (IIRC 7" or 10") from tripuso (who bought the rights when Pixel Qi folded) and swap the display of a corresponding-size notebook.
Nice! I've never heard of those. Pretty expensive for a 10" monitor though; I found them on AliExpress/Alibaba for ~$100. Adafruit stocks them too for $180, but with a driver board included.
Likewise. I'd also LOVE something like a cheaper/smaller version of Sony's Digital Paper to replace my notepads/sketchbooks. SIDE NOTE: I keep a ThinkPad X220 with a retrofitted IPS display and a 9-cell battery for dedicated writing/programming. It's got fantastic viewing angles, an amazing keyboard and 10hrs+ battery life! You can find great deals for a used X220/X230 (and its IPS display) on eBay. Worth every penny!
I'm actually thinking of buying a cheap hackable ereader with USB OTG support and putting Debian on it, precisely for this purpose.
(It looks like the Nook is probably my best bet, but it unfortunately doesn't have a powered USB port so I'd need to find an external power supply of some description for the keyboard. Any other suggestions gratefully appreciated.)
I'm trying to figure out how to turn my MailStation (320x128 monochrome LCD + keyboard for dial-up email) into a typewriter, but I don't know the easiest way to get the text out of the machine.
I really like writing on my eMate 300. Distraction-free (as it does little else), great keyboard, all-day batteries, and the word processor is surprisingly full-featured.
Also guaranteed to cause comment as practically no-one has seen one before.
eInk Carta panels apparently exist in sizes of up to 13.3". I wonder whether it's possible to get them with standard LVDS/eDP connectors to plug into off-the-shelf laptops…
I've always wanted something like this as well. I saw an article a while ago where a guy had a Heroku app (I believe), pointed his Kindle's browser to the app, and was using it as a word processor. Latency wasn't too bad either.
That's why I think a system aimed solely at text editing would be perfect, basically like a Kindle but in the form factor of a full-size laptop.
I'm imagining literally just a CLI or rudimentary text editor with an e-paper interface... no video or images, just text. Very spartan. Response time is more than good enough for something like that.
Wouldn't it take 250ms between typing a character and it showing up on the screen? That seems kind of annoying.
I could also imagine that searching through a textfile to find the place you want to edit would be pretty frustrating if you had to do it at Kindle speed.
Not necessarily. On the Kindle it takes about that long because it uses greyscale mode, which is slow, and also flashes the screen first to reduce ghosting, which doubles the time.
In 1-bit mono mode you can flip a pixel in ~100ms, which is plenty fast enough for text entry, although scrolling might not be good.
Early LCDs had terrible ghosting (anyone remember trying to play Doom on an early mono LCD and the image instantly turning into incomprehensible hash?), but I can't find any references for actual numbers for the pixel flip time. I'd be interested to know how they compare to eink. Does anyone know?
No idea about the actual numbers, but people have gotten PS1 games working on e-ink displays. I'd think that'd indicate it's probably perfectly doable for text editing.
Yeah, emulation would take more than the CPU on the Nook can do, but it really does show that the screen is capable at least. It might never play Crysis but it should be adequate for a lot of things.
I will say the market might be too small. But if the market existed, slow response times (which are still sub-second) would likely not hurt it too much for that usage.
So the hardware is OK but it requires a driver which also happens to be rather buggy? They should've just made it plug into VGA/DVI/HDMI like a regular monitor, no software required.
Eink doesn't work like that - you have to explicitly tell the display which areas you want to refresh, and more importantly, how... Emulating HDMI would be a tonne of work and also it wouldn't work very well.
That could be done in hardware. They could put a double framebuffer in it, one that contains the frames of the input signal and another with what's currently displayed, repeatedly diff'ing them to send the right signals to the display. The video linked in this comment https://news.ycombinator.com/item?id=10187876 shows a Nook's display being used for video, so I think it's quite possible.
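For what it's worth, here's a minimal Python sketch of that double-framebuffer diffing idea. It isn't tied to any real e-ink controller - the tile size and the send_partial_update() callback are stand-ins for whatever the panel's driver actually exposes - but it shows how little logic the "diff and push changed regions" step needs:

    # Sketch of the double-framebuffer idea: keep what's currently displayed,
    # diff against the incoming frame, and refresh only the tiles that changed.
    # send_partial_update() is a hypothetical driver call, not a real API.
    import numpy as np

    TILE = 16  # diff granularity; real controllers typically update rectangles

    def diff_and_update(displayed, incoming, send_partial_update):
        """Push only changed tiles to the panel, then mark them as displayed."""
        h, w = displayed.shape
        for y in range(0, h, TILE):
            for x in range(0, w, TILE):
                old = displayed[y:y+TILE, x:x+TILE]
                new = incoming[y:y+TILE, x:x+TILE]
                if not np.array_equal(old, new):
                    send_partial_update(x, y, new)  # refresh just this region
                    old[...] = new                  # record it as displayed

    if __name__ == "__main__":
        fb_displayed = np.zeros((480, 800), dtype=np.uint8)
        fb_incoming = fb_displayed.copy()
        fb_incoming[100, 200] = 1  # flip one pixel: only its tile gets refreshed
        diff_and_update(fb_displayed, fb_incoming,
                        lambda x, y, tile: print(f"refresh {TILE}x{TILE} tile at ({x},{y})"))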
Really like the idea, but the case design is... odd.
For some reason the shiny, curved black plastic reminds me of a weird cross between Darth Vader and something a character in a Terry Gilliam film would use. Stick a Fresnel lens over the screen and you're all set.
Always-on displays that need to blend into their environments, like a smart thermostat, and where power consumption is a concern.
Even in non-power-constrained environments e-paper can be preferable - the glow of a typical LCD display is really distracting and doesn't fit in a lot of (most) environments. Have enough gadgets around with LED displays and indicators and your home suddenly seems like a really bad 80s movie.
Nah, sometimes I just want to check the current temperature. A persistent problem with LCD-based devices is that, because of power consumption or general visual obnoxiousness, their screens need to be kept off until use, but there's always a time lag.
One of the more glaring problems with smart devices - smart watches especially - is that they have a slight pause before they realize they're being used and activate. An always-on device doesn't have this problem and can be used immediately, without delay.
By themselves they're not huge issues, but as you own more and more such devices it becomes really annoying to be surrounded by "smart" inanimate objects that constantly make you wait - even for a brief moment - before they are usable, and in that way they seem much less pleasant than the dumb objects they replaced.
Imagine having to rest a pen on paper for 500ms before you can begin writing, every time.
I'd say that lag only exists because the UI is running on a full multi-task OS. My microwave display doesn't take any noticeable time to wake up when I rotate or press a button, and neither did my radio or my VCR.
Smart but task-dedicated devices should probably run the display/buttons on a dedicated chip that talks to the CPU asynchronously, to avoid this problem.
The lag isn't UI-related or related to the specific software stack IMO. We've had the technology to build extremely lag-free devices for years now; the lag is more fundamental than that.
Your microwave and VCR are always on - their displays are also always on, so they don't need to be woken up before being used, they are already awake.
A thermostat with an always-on screen is the same, they're already displaying information and the touch screen can be immediately used without delay - but we don't want the display to be always on, because in general LCDs look awful in their environments. In general people prefer their homes to not look like server rooms with blinking lights and garish panels everywhere.
So there is a non-technological need to keep the screen off, and this presents a fundamental problem - before anything can be done, the screen must first be woken up. No matter how fast you make this process, at the end of the day it's one extra step that the user must take before the device can be used.
The Apple Watch detects when your wrist is held in a certain position to activate - but sensing this gesture itself introduces lag. Hell, even a simplistic "on" button on the device itself - even if the response is perfectly immediate - necessarily requires the time for the user to trigger the action itself.
You can't work around this with better software, because at the end of the day "activate device" is step 1, whereas with dumb technology step 1 is "use device".
The only real solution is for your devices to be always-on in a much more fundamental way - like your VCR or microwave - and that in some cases means e-ink screens.
No, the display of my microwave isn't always on, it automatically shuts off after a few minutes; yet, it's still fast enough that I can't perceive any delay.
It goes from complete shut-off of the entire device to moving animation in less time, from what I can tell, than it takes for him to move the button until the end.
EDIT2: I measured it frame-by-frame, it takes 5 frames from the moment the button starts to move until the screen is completely functional, showing the proper image. The video is at 25 fps, so that's 200ms, for the whole device to go from non-powered to fully working animation.
> "so that's 200ms, for the whole device to go from non-powered to fully working animation."
That's 200ms more than 0ms, which is the status quo for dumb devices.
> "yet, it's still fast enough that I can't perceive any delay."
Right, because the inputs aren't on-screen. Touchscreen devices inject more delay because you have to:
- wake the device
- read the output
- decide on input
- press the input
With a device like your microwave (or your arcade game) steps 1 and 2 don't exist, because the inputs are visible even when the screen is off. With your microwave you just walk up and start punching your desired buttons right away, the screen will catch up to you quickly.
And this is also why this is fundamentally a product design problem that isn't fixable via simply faster software - a non-primed human will take 500ms+ at each of these steps just in reaction time, the 200ms wake-time and whatnot is minor in comparison to the delay caused by having the human take multiple steps to do something. This is fundamentally about modes of interaction and not really about software performance.
We are at a stage in tech development where human delays vastly overwhelm purely machine-caused lag time. This is why compressing multi-step processes into a single step (or eliminating them altogether) is so valuable in terms of improving usability. Conversely - and this is something a lot of futurists just don't get - injecting additional steps into the use of something, even with very high performance software, results in disproportionate delays, and makes the product overall more annoying and cumbersome to use. This is the heart of why nearly all smarthome devices have been utter failures so far - despite doing something useful, they dramatically increase the human load on their usage.
Have you ever tried to set one of those digital mains timers? The designers always seem to leave a couple of buttons off, so the buttons all do three things, and you can never be sure which mode you're in - and I know people who have no problem writing code who have given up on them and bought a mechanical timer instead.
Now, you could have a single home control display - or perhaps a single display in every room. And they'd all show the same information, maybe customised for each room, and include a few extra pages for setting up timers, lights, proximity sensors, or whatever.
There are plenty of applications for this kind of IoT, but the tech just hasn't come together yet. I think the lack of good, cheap, large, low-energy displays is more of an issue than wake up times - because if the display uses very little energy, you can skip the wake up time.
You can more or less pick any two from the list, but getting all four seems to be impossible for now.
I honestly think it's less about the energy usage of the displays and more about the general visual annoyance of typical computer displays.
Sure, yes, one reason we keep displays off most of the time is because of their power usage, but more and more that's the secondary reason - there are lots of power-efficient displays nowadays that can maintain an always-on screen at relatively low power cost, and smarthome devices generally aren't reliant on battery power.
But the bigger problem is that LCD displays are ugly. They are backlit, and their response to brighter lighting in the room is to increase their own brightness to make themselves even more apparent. They are ugly, obnoxious, and annoying in the same way blinking router lights are, but multiplied several times over.
So we keep them off - we can afford to keep them on, but the fact that we keep them off 99% of the time is less about environmental consciousness and more an acknowledgment that they're visually noisy, distracting, and just kind of don't fit in. When you walk into a room you don't want your attention immediately drawn to an LCD panel on the wall with its pale glow.
E-ink fortunately doesn't suffer from this problem. It's clearly legible, doesn't require backlighting, and more importantly doesn't appear distractingly electronic in everyday use. You can afford to keep an e-ink display always on, not just because of its low power use, but because it won't be a constant annoying glow in your peripheral vision.
About half the time, adjusting my Nest goes like this:
1. Wonder what the thermostat is currently set to.
2. Approach thermostat hoping to trigger the proximity detection.
3. Stand in front of thermostat like an idiot for 2.4 seconds to see if the proximity sensor activates.
4. Decide proximity sensor won't activate.
5. Turn bezel by smallest possible amount to wake thermostat without changing setting.
6. Overshoot and change setting anyway.
7. Realize original setting was acceptable. Put it back.
Being always on would be nice. You can work around not having it be so (I'm sure the Nest's implementation can be improved) but the ideal would just be to have it always be visible.
Of course, half the time I'm doing this, it's dark enough that I couldn't read an e-ink display....
I think those are much more common in new installations. These are segmented LCD displays. An eInk display would not require an explicit segmentation layout, so new layouts could be programmed on the fly, prototyped very quickly, etc. And might even be a little easier to read, even during daylight hours I find these types of LCD displays difficult to read, especially from off-angles.
No one has funded actually putting a larger display into production, the eink folks show them off periodically to show that they can make them if wanted.
I'm more inclined to do the 6 rows of 7 displays, which allows for a day-aligned calendar for every month. It wastes a few displays of course, but the displays are easier to get.
ooh, I saw a bin of rebadged e-book readers at the auction liquidation place, that would have been a great use for them. Definitely will keep an eye out for them.
And then looking further I saw this: http://www.wvshare.com/product/4.3inch-e-Paper.htm - now there is a hack-a-day-worthy project. 42 of those bad boys (just $250 including shipping!), a frame, and clever RasPi software with a multiplexing serial I/O circuit, and voila: the forever calendar for your wall.
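If anyone wants to play with the 6x7 layout idea before buying 42 panels, here's a rough Python sketch of the mapping. The row-major panel numbering (0..41) is just an assumption about how you'd wire/address them:

    # Map a month onto the 6x7 grid of small e-paper panels described above.
    import calendar

    def month_layout(year, month, firstweekday=calendar.SUNDAY):
        """Return 42 values: the day number for each panel, or None for blanks."""
        cal = calendar.Calendar(firstweekday=firstweekday)
        weeks = cal.monthdayscalendar(year, month)  # weeks of 7 ints, 0 = padding
        weeks += [[0] * 7] * (6 - len(weeks))       # pad to exactly 6 rows
        return [day if day else None for week in weeks for day in week]

    if __name__ == "__main__":
        for panel, day in enumerate(month_layout(2015, 9)):
            print(f"panel {panel:2d}: {day if day else 'blank'}")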
These are specialised industrial suppliers; you can't just buy one somewhere, you have to request a quote from a supplier[0] and pray you'll get a callback (then try again through a different channel, because they most likely won't reply to somebody who just wants a single 32" panel - note how the contact form has a "volume" selection starting at "< 10000 units/year"). Seriously. When you try to get an evaluation/test kit you get a 7-segment display[1].
If you want smaller ones, ebook readers like the Kindle have made those relatively cheap and others have reverse-engineered the drive signals required:
Before shooting this down as expensive, can we stick to comparing like with like? The device is intended to have a long lifespan so educators can build quality teaching resources based on the platform.
It's also pretty great as a screen for your development/hacking Pi when you need to debug issues. I use two 27" monitors on VESA arms and the display is short enough that I can just tuck it underneath. The relatively coarse dot pitch means text is very readable in the terminal without any configuring. I actually really love it (and I didn't expect to when I first saw it...)
Not ideal for tablet use as the back isn't flat (Pi and driver board attached to it). You'd also need a power supply! I don't have any scales to hand but with screen + Pi + driver it feels about as heavy as my iPad Air.
It's great as a mini screen for your Pi though. :-)
There are a ton of Chinese brands that sell Raspberry Pi compatible touch screens (of very varying quality) at half the price. That being said, if I actually needed a touch screen for a Raspberry Pi I wouldn't hesitate to spend $60 on the official screen and save myself a lot of potential hassle.
Most of the cheap "Raspberry Pi compatible touch screens" use an SPI interface (over GPIO), which has a very limited data rate - only suitable for 320x240 / low-FPS graphics without any kind of HW acceleration. HDMI-compatible touch screens are typically more expensive than $60.
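Rough numbers behind that claim, if anyone's curious. The 32 MHz SPI clock and 16-bit RGB565 pixels are assumptions about a typical cheap SPI panel, but they give a feel for the ceiling:

    # Back-of-the-envelope SPI framerate limits (ignores command overhead,
    # so real-world FPS is lower still).
    SPI_CLOCK_HZ = 32_000_000   # optimistic SPI clock on the Pi's GPIO header
    BITS_PER_PIXEL = 16         # RGB565

    def max_fps(width, height, clock_hz=SPI_CLOCK_HZ, bpp=BITS_PER_PIXEL):
        bits_per_frame = width * height * bpp
        return clock_hz / bits_per_frame

    print(f"320x240: ~{max_fps(320, 240):.0f} fps theoretical max")   # ~26
    print(f"800x480: ~{max_fps(800, 480):.0f} fps theoretical max")   # ~5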
The official RPi display is great because it uses the onboard display connector (which has been useless thus far).
The $60 RPi screen is higher resolution, mounts with four screws (vs this one with three separate PCBs plus the LCD), and has an interface designed to be controlled by the RPi GPIO pins. I don't think saving $17 is worth losing all of that.
These are 7-inch capacitive touch screens, less than $70 from amazon prime, same resolution (800x480), mount with 4 screws, and use HDMI for video and USB for power/touchscreen. The downside is that the touchscreen drivers are binaries. Somebody online made an open source driver that appears to work perfectly, so if you're not using Raspbian or something else their binaries are made for, you might have some driver hacking to do.
Interestingly, the controller on all those displays can actually talk DPI, it's just never pinned out - and there are probably even cheaper DPI-only displays out there, judging from the price of devices that use them. Unfortunately they seem to be really difficult to get hold of.
You are very right about this. I've bought plenty of both. What it comes down to is that if you just want one to make a cool project or experiment with, buy the official one. The extra $ is worth it and then some in saved hassle. If you need to rig up half a dozen, or it's something you want to build over and over, it starts to become worth it to deal with the hassle of learning the quirks of the Chinese parts.
If they were using standard connectors for the display, why would a long lifespan be useful? I could just use a newer one (better specs) when the time comes, and could swap in a new one from another vendor if need be.
I spent this weekend creating a small microcontroller system (Freescale uC) with an attached TFT LCD + touch; the latter cost me less than a quarter of the price of this display and uses standard connections and readily available datasheets.
In their own words, the Raspberry Pi display uses interface signals that most other manufacturers shunned due to several issues (EMI, etc.), and its connector is hard to find on any other SBC currently on the market.
The display is huge compared to the Raspberry Pi board; what is the use case, considering the low resolution? Why go through all the trouble of using the connector on the Pi instead of its standard HDMI port?
I think we're confusing "lifetime" in the reliability sense with "lifetime" meaning how long this particular LCD part will be available from the supplier.
I've been working with these types of LCDs for a long time, and when it comes to these WVGA/SVGA 7" units you can get a ton from Asia at great prices but the factory will literally disappear in a few months. The LCD fabs constantly get bought/sold/churned as the major players like Sharp and LG shed off their last-generation fabs and the other smaller fish scoop them up.
Ideally a group like RPi will want to have the same part available for a long time so that they don't need to create new revisions of the interface cables, power supplies, or software drivers to handle the change. It's nearly impossible to autodetect one 7" LCD from another via software so you need to configure it entirely in a bootloader or kernel configuration line. And then that becomes a massive support issue.
Even though most people will use RPi as headless server or connect it to TV, it is good to have a decent "default" display option that works out of the box. The display looks very elegant in provided photos. It should be a great choice for hobby projects.
I'm not sure "most people will use it with X" makes sense as way of thinking about RPi. It's for tinkering and learning and plugging into your projects. As a starting point, 7” Touchscreen seems like a pretty good way to go for versatility.
I like how this lets people just tape a RPi to a screen to make a tablet. Nice first step for young tinkerers. Step 2, lasers.
I use RPis for bespoke installations for clients. One of the problems has been offering an easy way to make adjustments to the apps the RPi is running without a keyboard/mouse/monitor setup or having to SSH in. This is a great way to offer the ability to make changes. Looking forward to trying one out
For my headless Pis, I attach a BT TTL serial port to the rx/tx pins. They run about $15 or less. This lets me pair to the serial port from my laptop, watch it boot the kernel, login and do whatever cli commands I need. I can also use BlueTerm on my android phone to do the same thing (though make sure you get the hacker keyboard if you go this route).
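If you'd rather script it than use a terminal app, something like this (pyserial on the laptop side) should work once the adapter is paired and bound. The /dev/rfcomm0 path and 115200 baud are assumptions for a typical Bluetooth-SPP setup; adjust for your adapter:

    # Watch the Pi's serial console from the laptop side (requires pyserial).
    import serial

    with serial.Serial("/dev/rfcomm0", baudrate=115200, timeout=1) as console:
        console.write(b"\n")              # nudge the login prompt
        while True:
            line = console.readline()     # kernel boot messages, login prompt, etc.
            if line:
                print(line.decode(errors="replace"), end="")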
Promotional quiz games where there are two buttons wired up to the RPi which is serving a web-based game, tweet-powered devices (RPi listens to the Twitter stream and activates a power switch when a certain hashtag is used) etc.
I once created a passwordless WiFi hotspot and served weird text instead of websites. For example, if someone went to google.com, they received a "No Google for you today" message instead...
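For anyone wanting to recreate that, the serving side can be a few lines of Python. This assumes the hotspot's DNS (dnsmasq or similar) already resolves every hostname to the Pi itself:

    # Answer every HTTP request for every host/path with the same message.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    MESSAGE = b"No Google for you today\n"

    class TrollHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(MESSAGE)))
            self.end_headers()
            self.wfile.write(MESSAGE)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 80), TrollHandler).serve_forever()  # port 80 needs root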
A lot of people are comparing this to just buying an Android tablet and saying it doesn't make sense. You're probably right :)! However, the Pi has so many use cases outside of typical Android use that this product does make sense.
My example is that you can rig the Pi to work with your own receiver as a WiFi FLAC player with this device: https://www.hifiberry.com/. You have to control it over WiFi, but having a console that I can walk up to and interact with will be awesome. It will also be great for people like my father-in-law; I wanted to build him a device that has all 60s/70s/80s rock for Christmas, but I didn't want to have to set up a WiFi router and get a device for him just to control it.
The trouble with recycling parts like that is that the more mass-produced something is, the more likely it is to have idiosyncratic connectors and signal formats that mere mortals can't find out how to hook up and program.
Actually, unless you're deliberately looking at the companies who like to invent their own proprietariness, they're more likely to have standard connectors and protocols; consider laptop displays, for example. The ones designing the laptop want to be able to use displays from different manufacturers, and the display manufacturers want their products usable in as many laptops as possible. This leads to standards, both normative and de-facto, and there being only a small number of variations.
I'm not as familiar with mobile device displays but given that standards like MIPI DSI exist, I suspect it's a similar situation.
I see.
So the PCB part does this "adapter" work to make the screen "talk" to the R-Pi, right? And screens for mobile devices - don't they follow any standards someone could use to build a similar PCB?
"When looking for a device, we needed to look for what are termed ‘Industrial’ LCD displays. These tend to have better-quality metrics and guaranteed availability"
This explains the $60 price, which I personally find to be great value.
Look up the Innolux AT070TN90/92. It's been in production since 2009 and will be until at least 2023. There are people using it with the RPi, and no, it does not cost anywhere near $60.
Quite cheap, but for a fair comparison you also need to add the cost of the touchscreen and the board to drive all this, and your result would probably not pass EMC requirements (for a hobby project, you wouldn't care).
Still cheaper to source these parts yourself, you are right, but the official screen isn't meant to be the cheapest; it's meant to be cheap enough to be affordable, to work out of the box, to be reliable, of good quality, and to have some guaranteed availability.
You can work with other screens and build and write your own interface if that's part of the pleasure you get from hacking on these devices, but for people who have other goals, being able to get an affordable screen that just works allows them to spend their time on other parts of their project.
They provide a model with a touchscreen for an extra $7. However, you can't work with other screens and build and write your own interface because on the Pi, all the display modesetting is handled by the closed-source binary blob running on the undocumented parts of the GPU, and the RPi Foundation won't enable DSI connector support in it for anything other than the official screen they sell. It's a fairly common hobbyist thing to do on more open boards, it's just not possible on the Pi.
A budget Android tablet can cost slightly less than $60, and comes with the touch display, the processing power (more than a Raspberry's), the memory (more than a Raspberry's), embedded WiFi, Bluetooth, sometimes GPS, and two cameras.
Don't get me wrong, I love the Pi, and the problem is not the price tag of the Pi + accessories but the unsustainable price tag of entry-level Android tablets.
Plus the Pi comes with its GPIO and Raspbian, and is better suited than an out-of-the-box Android tablet for most projects.
Well, for most of us €30 (the cheapest I found here in France) for a tablet makes it a disposable device (this is different in developing countries).
There is a recycling tax here that is as low as ~5 cents for a tablet or a comparable electronic device. The burden of recycling is on the developing countries we send our used gear to. Not to mention the use of raw materials.
I can't say that this doesn't apply to higher-end devices, which tend to be unrepairable, but their lifespan is longer.
A Retina iPad display is cheaper, takes DisplayPort in, and is significantly higher resolution. I don't think anybody has reverse engineered the digitizer but I doubt that matters for a lot of applications.
For a personal project, I would absolutely go your route. But as a business, the foundation has to hedge against non-availability of the panel, because switching to another panel would go along with additional development costs.
Sure, but this is surely pitched to people building things rather than companies. I don't think anybody in their right mind would integrate something so expensive into their product, and a Raspberry Pi, while cute, is really not useful inside something commercial.
I was thinking the exact same thing. In 2006, a company called BugLabs made a device called the Bugbox[1], which was a modular CPU that you could attach components to. It had GPS, a basic touch screen, Wi-Fi and a few other features, plus it was programmable. I started to write an app for it, and in Jan 2007 the iPhone was announced. It had more hardware sensor components at a similar price with significantly higher build quality. I loved the Bugbox concept, but it wasn't pragmatic, and that's the sense I'm getting from this.
Does anyone know why Android devices don't have video inputs, or even an aftermarket way of getting video in cheaply? They would be super useful as small screens for various applications.
When I used to work on those devices, the arguments went something like this:
"Adding video input means we need an additional connector on the device, which the Industrial Design guys hate."
"We could multiplex the signal, but then we need some kind of proprietary connector and the additional cost of that circuitry."
"How many customers want to record video from an external source anyway? It's not worth adding $5 to the MSRP for a feature that isn't going to get used."
My guess is that the entire display circuit is tightly coupled with the SoC, and it would cost a bit more to get something into the circuit that accepted, decoded, and displayed a digital signal on top of the current SoC talking directly to the display.
Thank you very much! We work hard to provide great customer service and stock the best range of Maker and Raspberry Pi products available in the UK/EU.
Just ordered, what a great shopping experience. Huge plus when I saw the Bitpay integration. Relatively inexpensive shipping to the States. Will definitely keep you in mind for the future.
Glad to have you aboard the Good Ship Pimoroni. We recently relaunched our shop and a big part of that was revising shipping rates to make our offering more attractive worldwide. I'm really glad it worked out for you!
We don't have a physical store but if you arrange with us by e-mail in advance you'd be welcome to come pick up an order. Hit me up on jon@pimoroni.com
It will view most websites just fine considering the amount of mobile traffic out there. It won't be super sharp like pixel doubled smartphones, but it will render them just fine.
How about we use the touch display with the Raspberry Pi and a windup mechanism to charge an added battery pack, then start a charity to distribute these to developing countries?
Looks like there are a few ICs on the adapter board. I wonder if that means other screens could be hacked in and if the DSI connection is going to be a binary blob.
> One thing I miss in default raspberry pi is battery support. Today I just use the filesystem in RO.
It would be easy to put a battery + charging circuit in front. If you still want to power it through USB you could get increased efficiency by bypassing the DC-DC converter (just cut the trace from it and solder the output from your own PSU there).
If the RPi itself had a charging circuit, they would have to decide for the end-users what kind of battery chemistry, charge rate etc. it should support, and they would not be able to please everyone as these things are used for so many different purposes.
It should work with Windows 10 IoT but does require a quite recent version of the firmware to support it. I'm not sure what the current firmware version is
There are a lot of use cases where you want a machine to be running constantly but don't want the screen to be on constantly.
I can't put one of these in my bedroom unless I can easily and instantly switch the monitor on/off.
[edit] I have an existing PiTFT which also doesn't have a power button. I can turn it on/off programmatically, but a power button would still be highly useful.
[edit2] Can this screen be turned off programmatically without turning the Pi off at the same time, if the screen is powering the Pi?
[edit3] Hang on. Is it even possible to turn the screen on/off programmatically from the Pi if the Pi is powered separately?
I've just confirmed with the foundation that the display will be controllable from software but the current Raspbian image doesn't have it yet - this will allow you to turn off the screen and backlight while leaving the Pi running.
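Once that support lands, something like this should do it. The sysfs path is the one commonly reported for the official panel's backlight driver, so treat it as an assumption and check what your image actually exposes:

    # Toggle the official display's backlight from software (run as root).
    # Path is an assumption based on the rpi_backlight driver name.
    BACKLIGHT = "/sys/class/backlight/rpi_backlight/bl_power"

    def set_backlight(on):
        # sysfs bl_power convention: 0 = backlight on, non-zero = off
        with open(BACKLIGHT, "w") as f:
            f.write("0" if on else "1")

    set_backlight(False)  # screen goes dark, Pi keeps running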
Whilst that would work, it's less than ideal. I'd need to power the monitor and Pi separately at that point. If turning the screen off prevents that pass-through power capability, that is a poorly thought out design.
Your best bet is probably going for a Banana Pi board - they come with SATA to attach SSDs to. It's a close contender to the RPi, but the software support isn't 100%, though.
I've got one myself as an attempted NAS device. It worked 'ok'; I wouldn't advise it as a full-time thing though, as these things don't provide the AC nor the CPU power to perform such tasks.
This is disappointing. This screen is 7" at 800x480, so its sharpness is about 133 PPI (pixels per inch).
For comparison, my original Android G1 (several years ago) was 180 PPI, and it looked shit.
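The figures check out, for anyone who wants to reproduce them (the G1's 3.2" 320x480 panel specs are from memory):

    # Pixel density = diagonal resolution / diagonal size in inches.
    from math import hypot

    def ppi(width_px, height_px, diagonal_in):
        return hypot(width_px, height_px) / diagonal_in

    print(f"Official Pi display (800x480, 7 in):  {ppi(800, 480, 7.0):.0f} PPI")  # ~133
    print(f"HTC G1 (320x480, 3.2 in):             {ppi(320, 480, 3.2):.0f} PPI")  # ~180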
This is $60 plus taxes and shipping. I just found a 7-inch tablet for £28 ($43.10) on Amazon (plus a camera and RAM and stuff). Including taxes and shipping. Why is this so expensive?
It's expensive because the economies of building electronics are weird. As they said in their announcement, a big challenge is getting a display which is guaranteed to be available for a long period of time. That means not buying the bleeding edge, but something designed for lower volume markets. If you want something that's available for a long time, the likelihood is that it's already been available for a while. If you want the bleeding edge, it means they'll stop making it and start making the new bleeding edge in 18 months (with subtly different connectors, sizes, drivers etc)
The vast majority of the cost of this display is driven by factors unrelated to the technology level. Labour to build it, logistical costs, EMC qualification etc. That's the case for almost all cheap consumer electronics.
>I just found a 7-inch tablet for £28 ($43.10) on Amazon (plus a camera and RAM and stuff). Including taxes and shipping. Why is this so expensive?
Sales volumes determine economies of scale, it is likely the £28 tablet was made in quantities much larger than the pi screen (I'd guess at least 2 orders of magnitude).
Also, not all screens are equal. It's not just resolution: there's colour reproduction, viewing angles, brightness, contrast and response time. On the non-technical side mentioned on the blog, they said they wanted a manufacturer who would make the panel for a long time. I would bet a dollar that the Pi screen beats the £28 tablet's display on all the above parameters.
Which seem to be all crap for this panel anyway – 70° viewing angles implies it's the cheapest TN panel they could find.
For a non-profit(!) like the RPi foundation that can neither guarantee sales nor buy them in advance in massive bulks, availability is the only factor that really matters.
Indeed. They mention it's an "industrial" panel, which in my experience seems to mean "low contrast and brightness, narrow viewing angle, but wider temperature range[1]". The Innolux panel I mentioned in another comment here (https://news.ycombinator.com/item?id=10185433) is an example of this type.
But the £28 one comes with RAM, CPU, wifi sensor, camera, and it's two thirds of the price. Even if the other screen specs (brightness, viewing angle etc) are really terrible on the £28-tablet and incredible on the raspberry pi display, this still seems very expensive, and the difference seems like more than just an economy of scale thing.
I bought one of those £28 Android tablets from Amazon to experiment with for work a while back. The thing was utterly unusable - it turns out the hardware you can get for £28 is just barely enough to boot to the Android launcher. Don't even think about doing anything crazy like opening a web browser, or reading an eBook on it.
You had poor luck; I've used one of those for a couple of months and all my heavier apps (Firefox, Aldiko and Komik Reader) worked fine, albeit the memory pressure was felt (background apps were immediately dropped when opening a new one). You can easily get a tablet with 1GB of RAM for less than £28, which is equivalent to my Nexus 7 (2012), which I still use every day.
It's a Pi accessory, and one where the RPi Foundation have a lock on the market - because all display output configuration goes through the binary GPU blob, they control what displays you can use with your Pi.