I wrote a display server, a desktop environment, and a debugging/reversing tool (arcan-fe.com)
185 points by acrazyloglad on May 27, 2016 | hide | past | favorite | 46 comments


Is there a short summary of the features/value prop for each? The slides/video are a bit long and detailed.

A simple overview and link to Github makes a good pitch.

Curious how it compares to [1] bspwm and [2] sxhkd? Together they offer a minimal tiling window manager and a daemon for keyboard shortcuts; each has a small C codebase that's simple and easy to understand, unlike most other tiling WMs.

The config is excellent too and integrates well with [3] xmobar for a very clean and lightweight desktop, well suited to Arch Linux.

[1] https://github.com/baskerville/bspwm

[2] https://github.com/baskerville/sxhkd

[3] https://github.com/jaor/xmobar


I assume that the video you are referring to is the durden demonstration playlist [1] and not the general project presentation?

For durden alone, it's a bit hard to invite the comparison on any level other than features, since I have to cover the bases that xrandr, xset and a whole lot of other x* tools provide - but current and planned features are tracked in the checklist if you scroll down in the readme at [2]

[1] https://www.youtube.com/playlist?list=PLGqpKIeZOSp6quf6CmMOr... [2] https://github.com/letoram/durden


Durden confuses me a bit. Is arcan completely standalone - does it replace X? The screenshots for durden show normal applications in windows. Does that mean it is possible to install Linux, install arcan and durden, and then run normal applications without needing to install X (only those, of course, that do not rely directly or indirectly on Xlib or xcb)? How does it compare to Wayland?

I guess I'm missing the positioning inside the existing linux-ecosystem.

Feedback: having to whitelist applications in a database before being able to launch them sounds like a pain in the ass, and I predict that feature would absolutely prevent adoption.


The 'you can install linux, arcan and durden and then run normal applications' assertion is correct - that's how I am using it. The screenshots were taken on OSX because it was the only handy thing I could do the video recording on at the time. It's possible to 'test it out' with X/libsdl as a backend.

The actual setup is a bit hairy on most distributions because everything brings in X as a dependency, but I have both Alpine Linux and Ubuntu 16.04 running it here without X (or dbus or pulseaudio or systemd or ...). Here [1] is some proof and [2] are my rough notes on how I did that.

The whitelist part is entirely optional; it's only there for really security-paranoid stuff later on (sandboxing policies and chain-of-trust starting). ARCAN_CONNPATH=durden /some/bin style 'launching' is entirely possible.
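A sketch of that launch style: the connection point is just an environment variable naming the socket the running durden session listens on, so no database entry is involved. (The `env | grep` stands in for a real client binary, which this sandbox doesn't have.)

```shell
# No whitelist needed: a per-command environment assignment is enough
# for a client started from the shell to find the running session.
ARCAN_CONNPATH=durden /usr/bin/env | grep ARCAN_CONNPATH
```

In practice you would replace the `env | grep` with the actual client, e.g. `ARCAN_CONNPATH=durden /some/bin`.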

The second-to-last slide goes into the relation with Wayland; in short, there will be support for connecting applications that speak Wayland, and other protocols for that matter.

[1] http://cubeupload.com/im/lmBeMk.png [2] https://github.com/letoram/arcan/wiki/linux-egl


Sounds incredible. I'll put it on my to-blog list, if I get it to run at least.


First time I've heard about sxhkd. Deepest internal duh I've had this year. Another 'why didn't I make this before'. Thanks.


This is fantastic - it's not going to replace X11, but more importantly it wasn't designed to. I would love to see more projects like this. So much great stuff doesn't get written because people convince themselves it's not worth the trouble - that if it isn't a drop-in replacement that's a superset of the thing it aims to replace, then it's not worth it. But really that's just how you end up getting stuck with things like X11 for 32 years. Even if this doesn't take over the world, it's already a successful technical investigation and proof-of-concept.


thanks :-) you're right in that the only world it'll likely take over is my own, but I'm quite fine with that - the only device I rely on that it isn't the UI for right now is my smartphone, and the only thing missing there is KMS/GBM support.

Amusingly enough, the reason we've been stuck with X.org/XFree86 all these years is the hard coupling between the graphics drivers and the server implementation - and while different, of course, it is very much like that in Android land too: lower-level GPU interfaces act as a very good control and lock-in mechanism.


"In the sprit and dedication of the ever so relevant +Fravia, I might have hidden some other fun stuff out there in the world for thise that sill remember how what it means to search."

Hidden in white text at the end there. Sounds fun!


++ for picking out the +Fravia reference, and a middle finger to Google Plus for using the same thing.


HCU will rise again!


You need 10% of the code for 90% of the features.

But it's when you need to add that last 10% of features back that you realize how the crufty, bloated code you tried to avoid became that way.


I feel like this happens pretty often with rewrites. Up to the very end, you feel like you're doing great. You've written 90% of the original functionality in 10% of the code. Then you try to fill in the last 10%, and end up back where you started (with slightly better/more logical code).


I feel like the difference between a successful ground-up rewrite and a failed one comes from defining your scope correctly from the outset.

You run into the overflowingly crufty "last 90%" work mainly because your new thing still has to plug into an ecosystem of other components that are, themselves, old and crufty. If, however, you've defined your rewrite-project boundary to contain the entire ecosystem of those components, such that you get to redraw all the component boundaries and protocols with fewer redundancies and smaller surfaces, then you really can keep your 10% code—at the cost of having to write that 10% of the code for at least ~9x the number of codebases you were intending to.

This is what Plan9 managed to do: draw the correct boundary to simplify Unix, by putting all of Unix in scope for rethinking, eschewing the last 10% of the functionality because the other, redefined components now support the same thing in simpler ways. (See, for example, the way the Acme editor does menu commands.)

It's also—whatever you think of it—what systemd is doing, sort of consuming components of the Linux userland and leaving cleaner service boundaries in its wake. The sheer scope of that effort can make the project look bloated—but people forget to compare the codebase with the combined codebases of all the projects it took to have equivalent functionality before.


Well, if you're rewriting something that is actually worth rewriting, you can end up with way nicer code, but yeah it's still very hard to avoid some awkward crufty bits in that last 10%.

The tricky part is that "actually worth rewriting" is a higher bar than you would think. And even when you clear that, finding the sweet spot between "you ain't gonna need it" and "you ain't gonna need it until the rewrite is 95% done" isn't easy.


At least you have to give X-Windows credit for preemptively adding lots of crufty, bloated code and protocol support for the 10% of the features you'll never need, from day 1 through 10497.

https://tronche.com/gui/x/xlib/utilities/XRotateBuffers.html

Mechanism, not policy, huh?


I'm really excited to see folks still doing good fundamental work like this. The world absolutely needs outside-the-box thinking about HCI and software architecture.


perhaps needless to say, but I agree wholeheartedly ;-)

Personally I hope that when people get past the new-old 'sandboxing will solve security' trend going on, there'll be more people looking at the HCI connection instead of just saying "well, it's a user problem".


Extremely impressive, props!

Some parts of its novelty actually give me sort of a TempleOS feel, I suppose because it feels/looks so "different". I really loved watching the Senseye video (https://www.youtube.com/watch?time_continue=294&v=WBsv9IJpkD...), though it's something that I suppose could be made in other environments as well. The translators reminded me of edlib by Neil Brown: http://blog.neil.brown.name/2015/07/edlib-because-one-more-e... (http://blog.neil.brown.name/), there's a cool video of edlib somewhere as well. Although the concept of separate views on the same underlying data is not novel, it's a shame such things aren't widely available (or in use), as far as I know.

Despite being a fan of statically typed languages, of all scripting languages I like Lua the best. The language is lean but powerful and LuaJIT is a coder's dream. I don't understand whether one would be forced to create all "apps" (appls?) in Lua.


Thanks!

Arcan is definitely written to be usable without the Lua parts as well, and on the project timeline the final piece of the puzzle is libifying the engine. The reason it is not like that right now is just that the internal engine interfaces are not at a stage where I would 'commit' to (and document and test) an interface that others might end up relying on.

I also much prefer statically typed languages; I consider Lua somewhat "the best among the worst". One nice thing with its VM integration API is that it lends itself to being swapped for another VM without too much work rediscovering a type model. If I had the luxury of more time, binding the engine to Haskell would be high on the list.


Senseye really looks like "Cantor.Dust" (https://www.youtube.com/watch?v=4bM3Gut1hIk). I wonder if c.d. has been an inspiration for it.


from https://github.com/letoram/senseye/wiki

"Background-wise, it is losely mentioned in a rather confusing article I wrote a few years back (search around for Retooling and Securing Systemic Debugging) as part of some thesis work, and later influenced on ideas presented in Voyage of the reverser by Sergey Bratus and Greg Conti, and on Christopher Domas work on Cantor Dust which also takes inspiration from Aldo Cortesis visualization works."


Here's the github link: https://github.com/letoram/arcan


Excited to see Lua here, I feel it complements C very well for desktop scripting.

I was just browsing yesterday for some sci-fi desktop interfaces [0, 1].

[0] http://nnkd.org/dexui/

[1] http://sciencefictioninterfaces.tumblr.com


like the dexui look and feel, hmm, that would be about an all nighter with my all-rush mixtape and a two litre bottle of shasta..


Okay, I'll bite... what would this all-rush mixtape look (sound) like? I'm curious. :P

And I would describe Arcan et. al. as a whole as curiously interesting. The only thing I know of like it is PicoGUI, a 2000-era display manager that could go as low as 1bpp on a ~20MHz 68k running uClinux (specifically this: https://www.flickr.com/photos/micahdowty/albums/721576270325...), while also being able to handle 24bpp and even some bits of OpenGL. It was abandoned in 2003, and the author did a writeup of the worst of the design fails (some were... notable) at http://picogui.org/papers/ghost-of-picogui-past.html.

While I would firmly categorize PGUI as a subset of your work, I thought I'd mention it because I think it tried to reach similar goals to what you've done, and you may potentially find the source code vaguely interesting to scavenge through on a rainy day (although it might take eg a Debian 4 VM to build cleanly without fuss). http://svn.navi.cx/picogui/trunk/ https://sourceforge.net/projects/pgui/files/picogui/ https://sourceforge.net/projects/pgui/files/OldFiles/ (hidden directory I found?)

I have no idea if this will be interesting, I figured I'd mention it just in case.


Futurama reference ;) https://www.youtube.com/watch?v=VmCqn-DNSA0 though apparently popular enough that some people have made all-rush playlists ( https://www.youtube.com/watch?v=JsKBIBJj-4M&list=PL8BF75E7F0... )

On-topic - definitely a cool project and one that I haven't heard about actually, added to my queue of weekly source-code readings.

I'd also consider DirectFB similar, and another project that I just have in my vaguest of memories (I think they used a project name like Cairo that got google-CVed out of existence by the more recent one) from the 2001-02ish era. They tried to attack X back then, but fell victim to the driver situation (but they had arbitrarily rotated windows!).

One of my stronger personal influences is BeOS though - Compare https://www.youtube.com/watch?v=BsVydyC8ZGQ to https://www.youtube.com/watch?v=3O40cPUqLbU&feature=youtu.be... :)


Ahh... I don't watch any^H^H^Henough TV. :P

I've been meaning to poke around PicoGUI myself - I personally love stuff that's tiny and efficient, always looking out for things like that. (I just found a bunch of old versions of Contiki, the ones with the GUI stack; the non-broken ones were fun to play with: http://hitmen.c02.at/html/tools_contiki.html) Very cool to hear I recommended something relevant! ^^

I've heard of DirectFB, but my understanding is that it just tames framebuffers, as opposed to dealing with everything below the toolkit level.

I don't recall anything called Cairo myself, but as for attacking X I do vaguely recall a company that made a closed-source alternate Linux display stack+desktop environment; it was very rudimentary and went nowhere because of that.

I like BeOS too. I keep meaning to install it (and OS/2... and QNX... and...), just preferably on real hardware. I have a bunch of old stuff hanging around here that I hope to use once I have a little file server and I can free up my dozens of old HDDs :P

And your video (I watched the other one in 2008 :D still remember it) was really awesome, and led to git cloning and compila--"wait it's done already?! Nice."

Now my main request is, please update the documentation on GitHub (particularly the quickstart instructions) so we all aren't stuck with just welcome.lua (which I initially thought was a builtin options screen then double-checked to see if I'd specified the args wrong, lol). I (and probably everyone else) want(s) to play with the stuff you're demoing in your videos!

Another tiny thing I'd mention about the video (it was the first thing I noticed actually) was that it would be a) awesome to watch and b) a great demo of your engine's graphics pipeline performance, if you have the media player update its graph at like 60fps. Or at least 24fps. Just a thought.

For my favorite reference of what fast VU looks like, I recommend rezound (http://rezound.sf.net/) - run it once, then edit ~/.rezound/registry.dat and set meterUpdateTime (in Meters {}) to something like 4 (it's a delay in microseconds I think, 4 is nice and doesn't flood X with updates too fast on my machine). The weird knob thing to the right of the VU (very bottom-right) adjusts the frequency response.

Another possible source of fast VU updates is the Linux port of Open Cubic Player (http://stian.cubic.org/project-ocp.php) - alt+c, set framerate to 60 or 120fps, set font to 4x4 (after loading music :D), and the result there looks nice, too - although it responds best (for me) with a non-fullscreen window. (This one's a bit of a project to learn all the shortcuts for, IMO.)


I actually force myself to do at least some of the development on a raspberry pi (no cross-compilation), precisely to keep the build time snappy.

I had even forgotten there were instructions like that; outside the technical bits I'm probably the worst person to write helpful guides, as the workflow etc. is so strongly internalized that most of it just 'feels' obvious.

Now don't look at the code for generating the FFT (or anything else in the _decode frameserver for that matter), the reason it doesn't update smoother is just how much I don't get along with "lib"vlc, but I added it more as a novelty (spoiler, FFT precision is murdered and packed into a texture and the rest is a shader, it doesn't even synch well).


Are you me? That's my ideal approach! Except I'm not so crazy to consider running the build on the slow box (:D), rather my approach would be to have 1Gbps+ between the two machines, build on the fast machine, and run (perhaps directly from NFS?) on the slow machine. I was thinking of editing on the slow machine too (so it's the machine you use) but I don't think going that far is actually necessary. Now, as to how exactly I would implement this idea under DOS for my 486 is another story entirely...

I know what you mean by the obviousness thing. I however am sitting here with no idea how even to desktop with Arcan. I wouldn't mind finding out though.

And I was wondering if it was updating the FFT so slowly because of... yeah, something like that. I see. I have to acknowledge and agree that it does indeed not sync well. Fixing this sounds like a large pile of boringness; I can see that the other display components update pretty quickly, at least. (But now I'm wondering, is the video of a VNC server? I thought it was SDL. How are you updating the screen so quickly?!)


If you poke me on IRC (letoram, #arcan @freenode) I can probably help you out in using the thing.

I think the hardest I've pushed the shared memory interface is basic computer vision (filtering, 9-segment display OCR, tracking an x-y plotter and some glowing devices) on 8-bit mono 2x1000fps 320x180-or-so cameras. Even then most time was spent waiting for synchronization bottlenecks because of OpenGL2.1 limitations.

You thinking about https://youtu.be/bQlHnW2qCh0?t=1m28s ? The round-trip time there is gpu-composite -> readback -> vnc-server -> vnc-client -> back to gpu.


This is so annoying - I keep finding reasons to go on IRC, but am still stuck on what IRC client to use. ("No, my web browser feels weird." "irssi doesn't have a multiline text box." "weechat isn't configurable like irssi is." "I don't want to use GTK or Qt." Lost cause: check. Protip, don't drown yourself in chat client ideas for 4 years, you'll poison yourself to everything out there :X) - but the mention of IRC is duly noted. :D

I had a major derp, however: I forgot durden and arcan aren't, err, the same thing... let's just say I just installed durden, and successfully played around for a bit. It's a bit slow on my frankensystem (GPU is older than motherboard+CPU... don't ask >.>) but still very cool.

And wow, that's a pretty awesome usage of the SHMIF, cool. I wonder if porting Arcan to Vulkan would produce interesting results...

And I meant https://youtu.be/3O40cPUqLbU?t=279 - in particular the part I seeked to, where you play and record video and blit it onto a 3D surface so on and so forth - the VU is noticeably slower than everything else. I just thought it would be cool if the video was like "look: EVERYTHING is updating at 60fps!" - but it's cool. :P

About the video you linked, that's pretty incredible too... wow.


I have high hopes for the shmif port to vulkan. As far as I can tell, there's currently no good way to flag pinned memory as shared and build the shmif around that, but if it were possible it would mean predictable synch, controlled colorspace conversion and ... oh well, QEmu integration first :-)

You might be able to reduce the fillrate cost by trying config/system/simple displaymode.. it removes a lot of features but you save at least a full extra renderpass.


Wow, shared pinned memory sounds absolutely awesome. Please tell me NVIDIA doesn't have to alter their binary driver to get this working... please. :P

I'm not sure if you need to shout at NVIDIA, Linux or Vulkan to make this possible, but there are so many awesome things people could do if this were possible...

I would totally recommend you shout at all the relevant mailing lists - even NVIDIA's, if it comes to that :P to get this supported.

And how are you managing qemu integration? What do you mean by that? o.o

I tried the simple displaymode, which seems to be a tad faster, but it's still glitchy - the issue specifically is that opening a fullscreen terminal (at 1600x1200) basically makes my mouse a slideshow, and there's noticeable typing lag too. Resizing the terminal down makes it go away: this is proportional to terminal size.

And as an aside, -w 1600 -h 1200 makes the mouse cursor go halfsize, -w <= 1599 and -h <= 1199 makes the cursor normal size. Curious.


QEmu integration just started working, https://github.com/letoram/qemu - it's far from complete enough for me to try upstreaming patches yet. -display arcan.

Your GPU is most likely fill-rate limited. Are you running this entirely natively (EGL/GBM/KMS stack) or using SDL through X? In the latter case there'll be so many fullscreen sized buffer copies that your GPU cries. There's more special tricks I can do to get the fullscreen case to go faster, and it's on the near todo for Durden anyhow.

Also, the terminal emulator only really supports truetype fonts (there's a built-in fallback that is quick, but it's awful and only 7bit ascii..), which is damn expensive to render.


Opens repository "Official QEMU mirror." "I see." So basically... you can display QEMU inside arcan. That's really neat.

My GPU is everything-limited :P I just tried arcan on my integrated video, no issues there. Maybe an almost imperceptible bit of slowdown, but just that, almost imperceptible.

I'm using an ancient ATI X1300-series [1002:7183] fanless GPU I yanked from an oldish workstation so I can have 3 screens in a pinch. It runs two displays; my i3's HD 2000 runs the 3rd. (And I can't move Chrome between :0.0 and :0.1, which X won the fight over me having. Yey :3)

As for drivers, I'm using the radeon driver w/ KMS (switching between X and tty is instantaneous); the only issue is that I occasionally see framedrops due to driver bugs, but that's the only problem I have; the driver is very fast. (But it should be, the card's a decade old. :P)

I'm not sure if I'm running truly fullscreen, incidentally - I can see vestiges of i3 (my windowmanager) in the form of a 1px border underneath the bottom of the arcan window.

...So I just tried -f... and you don't support screens with different resolutions. My center display is 1600x1200, my left is 1280x1024. I get fullscreen on the left, and a cute arcan window (that I can't move my mouse into) on my center display. (The left and middle are on the ATI card.) With fullscreen it's still slowish (which I understand is to be expected at this point).

AFAIK I'm using SDL; I tried building with X11 but the build balked, so I conceded to the instructions on github :P

About truetype and terminal emulators, I had an idea a while ago: glyph caching. An 8x15 pixel cell at 24bpp (3 bytes/pixel), times 256 ASCII glyphs, is 92160 bytes. That's remarkably manageable. Unicode blows the 256 out of the water, but how much of Unicode will the average terminal session see? Certainly not all of it, so the cache eviction algorithm won't need to be particularly aggressive or smart.

The only really major catch is that #222222 will not antialias the same as #FFFFFF, and trying to make one look like the other will look either really dim or really pukey, so the caching system would also need to cache each color of each character that it sees. However, this is not actually totally the end of the world, since glyph tables don't actually take up all that much space, as I've just noted. Very mathematically inelegant, but quite possibly worth it in practice.
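The back-of-the-envelope number above can be reproduced directly (cell size, depth and glyph count are the commenter's illustrative figures, not anything Arcan mandates):

```shell
# 8x15 pixel glyph cell, 3 bytes per pixel (24bpp), 256 ASCII glyphs
cell_bytes=$((8 * 15 * 3))
cache_bytes=$((cell_bytes * 256))
echo "$cache_bytes bytes"   # prints "92160 bytes"
```

Per-color caching multiplies that by the number of distinct foreground colors actually seen, which for most terminal sessions is still a handful.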

That's the point I would be stopping at; you may be interested in going full crazy with something like https://news.ycombinator.com/item?id=11440599 (this looks like a lot of fun to play with).

What's the builtin fallback font, out of curiosity?

Also, I want to clarify and emphasize that, running the terminal in arcan on my old ATI card, the size of the terminal window is directly proportional to the input+video lag. As I resize the terminal smaller (in floating mode) it speeds up; as I resize it bigger it sl-ow-s ddoowwnn and gets stuttery and glitchy.

Lastly, I just discovered an extremely curious phenomenon I thought I'd mention. I wanted to screencap the terminal cursor to get the font size for the calculation above. After learning about "mouse lock > no" (good riddance :D) so I could take the screenshot, I soon found that the input lag I'd experienced was actually affecting my entire X session. Moving the arcan window offscreen seems to alleviate it; moving it back onscreen and eg running htop (inside a fullscreen terminal >:D) bogs everything down so badly that typing into Chrome (running on the i3 GPU, on :0.1) is very very very noticeably slow, and resizing xterm is... well I can sit there for 10-20 seconds watching it repaint. The weird thing: my CPU was near 0% utilization. This is either a bottleneck in my GPU, X or both (or X blocking on GPU bottlenecks).

I'm not highlighting any of this to make arcan look bad; I personally find it really interesting to throw code at suboptimal hardware and see what, if anything, can be done to speed it up - because if code runs well on worst-case hardware, it'll fly on average kit. So I thought I'd mention the above in case it's interesting. If you have any ideas for benchmarks I can run or timing information I can collect I'm fine with supplying that. ^^


seems we've hit some magical > reply depth limit. I'll get back to you via the gmail account in your "about".


That works fine - I've hit this before, I think that it takes a minute or two for the reply button to show up. I could be wrong though.

Clicking the "n minutes ago" thing opens a working reply form for me if I can't see the reply link, I'm not sure if it works in this case.

Either reply method works. :D


First of all, props to you. It's cruelly insane ( I left the typo :p).

If you mention your project for persons with disabilities, couldn't you get funds or something?

Just saying this because on the last slide you mentioned lack of resources, time and motivation.. :)

PS. Any reason for 2 accounts? Acrazyloglad for posting and crazyloglad for comments


Thanks (+for the typo). I have a few options for funds, and for a while I developed an appl for computer vision stuff that supported me. Resources here is more in the 'human resources' category: packaging, configuration management, release management etc. I make enough with my dayjob and I like developing this in the lower-pressure, spare-time spectrum. I have some other hobbies that leave me at risk of serious injury, so it might just happen "naturally".

The posting account was simply because I throw away VMs with browsers immediately after use and I chose a hard to remember password without an email :)


> I have some other hobbies that leave me at risk for serious injury

I'm very curious what you meant by this, if you can elaborate. The first thing I thought of was skydiving, incidentally. :D

Also, just to clarify, you browse exclusively inside VMs that you then immediately destroy? That's neat, although it sounds like a lot of work.


And it was skydiving, it will probably be changed to paragliding then paraplegic ;D

Not that much work, I got a victim machine that is just some scripts, qemu and VNC. It boils down to a keypress to get a new instance activated and when the VNC connection terminates the image is killed. I still place too much trust in the browser..


Oh, that's awesome :D I really want to do that one day... but no going in for you, mkay?! Thou shalt stay in one piece, particularly thy spine.

Also, wow, that's really neat, having everything Just Work(TM) like that. If you use this setup on a regular basis (and it sounds like you do), I can't help but suggest ditching QEMU's VNC server and using TigerVNC (from inside the guest), which automatically resizes the remote virtual display (assuming the guest VM is Linux). Other options include going full crazy and setting up Xpra, a rootless VNC alternative that's a bit heavyweight, or the potentially interesting idea of "DISPLAY=host:0 chrome &" inside the guest (of course with lots of intricate fussing with security).

I can understand overtrusting the browser; sadly thanks to the W3C (which is now basically run by a bunch of corporations) and their hundreds of thousand-page specifications, browsers are impossible to audit. Besides Dillo (LOL) and Cargo (bookmarks to come back to in 1 year), NetSurf seems very interesting, and builds pretty quickly (after you figure out the build process :D:D) too.


For some work (not browsing though) I'll go with a Qemu backend that connects to Arcan directly. In fact, I just got that to boot a win10 VM with graphics, mouse and keyboard..

Quick glance at netsurf makes it seem that it would not be that many hours of work to have it use arcan as A/V/I backend, hmm..

After the full-on derp that is WebRTC and WebGL, WebUSB will be the next round of "fun".


I just tried that. It works quite nicely, very cool.

And NetSurf is designed to be able to output to a framebuffer (with a rudimentary builtin toolkit), yep. (AFAIK you can get it to build the framebuffer mode on top of straight Xlib, for testing that mode on Linux.) NetSurf doesn't do audio yet (no <video> or <audio> tag support for starters) but yeah, an arcan/durden-based UI would be pretty cool! :P

WebGL wasn't a complete failure, it gave us http://acko.net/. WebRTC is fun though - I remember someone mentioning (it might've been on here) how they asked a WebRTC implementer why they specified arbitrary TCP/IP communication channels, and the person responded that you might want to use it to talk to your toaster. That functionality lets us have https://webtorrent.io/ though, so...

The potential worst of WebUSB: cloud-based hardware capture devices, where all your oscilloscope's processing happens in the cloud, and you'd better pay your subscription on time.

The potential best of WebUSB: crowdfunded/distributed USB protocol debugging and live sharing - plug in your device (possibly inline with a software-controllable on/off switch, for cold resets), point a webcam at said device, maybe attach probes to some test points, walk away, and it's been reverse-engineered by tomorrow morning, possibly for a fee, possibly for free. This would cover at least 50% of reverse-engineering. I'm not sure how viable it would be through the browser, though...

I say let the slightly clinically insane specifications guys go on reinventing all the wheels. They leave a wake of things that open and shut in their path that are just capable enough to actually be useful. \o/


I love this! Just digging through the documentation, excited to play around with it :)



