Detecting edges of images at the speed of light
(phys.org)
81 points by bookofjoe 4 days ago | 21 comments
Someone 3 hours ago | root | parent | next |
> pixelizing should be relatively simple
https://www.yankodesign.com/2024/07/04/this-crystal-fragment...:
“The Pixel Mirror measures 16mm x 16mm x 10mm, which means it’s small enough to be worn as a pendant in a necklace. Monoli’s series of wearable and handheld prisms are all handmade, and because of the nature of polishing natural stones, they are not perfect square “pixels”. They are handmade to suit the condition of the available stone.”
Discussed on HN in
https://news.ycombinator.com/item?id=40895907 and https://news.ycombinator.com/item?id=40890633
Xmd5a 3 hours ago | root | parent | prev | next |
I've had the same idea for a while, but the classifier I had in mind was for identifying wood ("yep, it's wood"). I wonder if this kind of tech could be used for low-light enhancement (night vision). This would require "injecting" light into the "lens" I guess.
MNIST classifier using a DNN: https://www.nature.com/articles/s44172-024-00211-6
koromak 5 hours ago | root | parent | prev |
I suppose if your lens is actually a vertical stack of lenses, each with hundreds of inputs and outputs, then why wouldn't this work? Although I cannot fathom fabricating it. Maybe start by finding the absolute simplest/smallest image classifier you can.
koromak 5 hours ago | root | parent |
EDIT: Of course people are already doing this. D2NN seems to be the keyword, Diffractive Deep Neural Networks
TeMPOraL 4 hours ago | root | parent |
Yes, by "lens" I meant a possibly large stack of optical elements, each designed to perform a single computational step. Kind of like layers of an NN, but etched on sheets of plastic.
Thanks for finding the proper term for this!
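To make the "layers of an NN etched on sheets" analogy concrete, here is a toy numerical sketch of the D2NN forward pass. This is my own assumed model, not any paper's exact physics: each layer applies a per-pixel phase mask, and a fixed linear diffraction step (crudely modeled as a Fourier-domain transfer function) carries the light to the next layer.

```python
import numpy as np

rng = np.random.default_rng(1)
N, layers = 32, 3

def propagate(field):
    # Crude stand-in for free-space diffraction between layers: a fixed
    # low-pass transfer function applied in Fourier space (linear, passive).
    F = np.fft.fft2(field)
    fy, fx = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N), indexing="ij")
    H = np.exp(-(fx**2 + fy**2) * 40.0)
    return np.fft.ifft2(F * H)

# Each "layer" is a per-pixel phase mask (here random; a trained D2NN
# would optimize these phases for a task like classification).
masks = [np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N))) for _ in range(layers)]

field = np.ones((N, N), dtype=complex)    # incoming plane wave
for mask in masks:
    field = propagate(field * mask)        # phase mask, then diffraction

intensity = np.abs(field) ** 2             # what a detector behind the stack sees
print(intensity.shape)
```

Note there are no nonlinearities here: the whole stack is one linear-optical transform of the input field, which is exactly what makes it plausibly realizable as passive etched sheets.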
infogulch 7 hours ago | prev | next |
Huygens Optics has a great video, "Fourier Optics used for Optical Pattern Recognition". He starts by explaining the Fourier transform with respect to audio, then demonstrates a physical filter that screens the resulting convolution pattern to identify matches. Very cool: https://www.youtube.com/watch?v=Y9FZ4igNxNA
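For anyone who wants to play with the idea digitally: the trick in the video boils down to the fact that cross-correlation with a template is a pointwise product in Fourier space. A minimal numpy sketch with toy data (not the video's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((64, 64))
template = np.zeros((64, 64))
template[10:14, 20:24] = 1.0           # a small bright patch to look for
scene += 5.0 * template                # embed the pattern in the scene

# Matched filter: scene spectrum times conjugate template spectrum,
# inverse-transformed, gives the circular cross-correlation.
corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(template))).real

# The correlation peaks at the shift between pattern and template;
# here the pattern sits exactly where the template has it, so shift (0, 0).
peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)
```

An optical correlator does the same multiplication physically: a lens produces the Fourier plane, a filter mask sits in it, and a second lens transforms back.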
esperent 14 hours ago | prev | next |
Edge detection is used a lot in 2d & 3d graphics, microscopy, telescopy, photography, and a ton of other fields.
If this could be done almost instantaneously and without additional energy, it would be very useful.
I can imagine having tiny analogue edge-detection components in cameras, microscopes, telescopes, and maybe even graphics cards.
This could be a part of the future of computing - tiny specialized analogue components for specific tasks.
On the other hand, maybe digital compute, whether silicon or futuristic alternatives, will always be cheap enough that the economics will never work out - it'll always just be cheaper to throw more general-purpose compute at a problem.
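For reference, the digital baseline such an analogue component would replace is just a couple of small convolutions. A minimal Sobel sketch in plain numpy on a toy image (in practice you'd reach for scipy or OpenCV):

```python
import numpy as np

img = np.zeros((8, 8))
img[:, 4:] = 1.0                       # vertical edge between columns 3 and 4

kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal gradient
ky = kx.T                                            # vertical gradient

def conv2(a, k):
    """Valid-mode 2-D sliding-window filtering (cross-correlation)."""
    h, w = k.shape
    out = np.zeros((a.shape[0] - h + 1, a.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(a[i:i + h, j:j + w] * k)
    return out

mag = np.hypot(conv2(img, kx), conv2(img, ky))
print(mag[3, 2], mag[3, 0])            # → 4.0 0.0: strong response at the edge, zero away from it
```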
juancn 3 hours ago | root | parent | next |
They made a 7 layer film that does edge detection photonically, original paper: https://pubs.acs.org/doi/10.1021/acsphotonics.4c01667
kidel001 5 hours ago | root | parent | prev | next |
To add onto this... some amount of what they are calling edge detection here seems to overlap with what has already been implemented in microscopy using phase contrast... which has been around since 1932.
0u89e 9 hours ago | root | parent | prev |
I'm not really sure how any of this applies to software development, as they detected edges in actual physical films, which have different layers of chemicals. Still a very interesting approach, and that knowledge can probably be transferred to other fields (humans consist of layers of chemicals that differ from those of cars, houses, and trees, for example), but could you and other people please actually read the paper before writing anything here?
PS The article's title is on a clickbait level, as what "images" means nowadays does not correspond to what is used in the article.
kidel001 5 hours ago | prev | next |
Technically... they are outlining edges at the speed of light. Detection is a separate process entirely.
fzimmermann89 11 hours ago | prev | next |
If I am not mistaken, this is done by modulation in Fourier space. We have already been using this in optical setups for ages - at the speed of light.
The interesting part imo is the implementation of this idea in their work and the efficiency and physical size.
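As a digital illustration of that idea (my own sketch, not the paper's method): edge detection is essentially a high-pass filter applied in Fourier space, which an optical setup performs by blocking the low frequencies in the Fourier plane.

```python
import numpy as np

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0                # bright square on a dark background

F = np.fft.fftshift(np.fft.fft2(img))
ky, kx = np.indices(F.shape)
r = np.hypot(ky - 32, kx - 32)         # radial spatial frequency (DC at center)
F[r < 4] = 0                           # block the low frequencies

edges = np.abs(np.fft.ifft2(np.fft.ifftshift(F)))
# Energy concentrates along the square's border: an edge pixel comes out
# brighter than the flat interior.
print(edges[16, 32] > edges[32, 32])
```

Physically, the "F[r < 4] = 0" line is a small opaque dot placed at the center of the Fourier plane between two lenses.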
mcherm 14 hours ago | prev | next |
Unless I am missing something, the article doesn't actually explain HOW this was done, nor does it link to a research paper or other source that explains it.
In my mind, this disqualifies the article from being something "that good hackers would find interesting" and thus makes it inappropriate for this site.
(Please someone, tell me I missed the part where they actually explained it.)
owobeid 13 hours ago | root | parent |
The link to the paper is in the first paragraph.
DarmokJalad1701 13 hours ago | root | parent |
"Tamm Plasmon Polaritons" sounds like something from Star Trek technobabble.
TeMPOraL 9 hours ago | root | parent | next |
Technobabble in TNG-era shows was actually quite consistent and even made sense if you squinted at it.
mrandish an hour ago | root | parent |
Yes, there were people on the production staff who, in addition to other duties, were responsible for maintaining as much consistency as possible in both technical descriptions and capabilities with prior episodes and Star Trek canon. Mike Okuda was one such person.
Obviously, the show was fiction and made for a broad general audience, so their technical inputs were sometimes 'streamlined' by the producers to minimize too much technical explication dialogue. But they generally did try to get enough on-screen to be both series-consistent and technically plausible within the fictional universe.
nelblu 12 hours ago | root | parent | prev |
Darmok and Jalad on the ocean!
Galatians4_16 5 hours ago | root | parent |
Shaka, when the walls fell.
uhahohheck 6 hours ago | prev |
OT, but: oh... wait... just another Thursday. I came here to HN just for the new games, and no... not one single... no new game at all on the front page of HN... so what the heck, is everyone busy?
[modus:with.a.sense.for.the.weekend]
regards,...
TeMPOraL 9 hours ago | next |
Damn, there goes my idea. The other day I wondered how far you could go by embedding classical and neural filters in plastic sheets, to make a "magic lens" that does some img2img transformation in real time. E.g. pixelizing should be relatively simple, but I wonder if you could go as far as transforming a small image classifier (say, "cat" vs. "hot dog") into a physical object, to create a passive "lens" that turns incoming light into words ("cat", "hot dog", "N/A")? Feels like it should be doable.
Anyway, thanks for the link; I'll study the paper and look up the references.
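Pixelizing really should be simple in this sense: block-averaging is a fixed linear map on the image, which is the kind of operation passive optics can in principle implement. A tiny numpy illustration:

```python
import numpy as np

img = np.arange(64, dtype=float).reshape(8, 8)
block = 4

# Block-averaging: group pixels into block x block tiles and average each
# tile - a fixed linear transform of the input image.
pix = img.reshape(8 // block, block, 8 // block, block).mean(axis=(1, 3))
print(pix.shape)   # each output "pixel" averages one 4x4 tile
```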