I'd been keeping a list of projects I wanted to see or build some day in a GitHub repo, but now I think a webpage is probably a better spot, so here we are. The projects listed here would require substantial technical effort, so the descriptions are mostly desired features rather than proper specifications.
After a talk on image recognition APIs given by Jack Cox at Stir Trek 2017, I was inspired to develop a program to sort my archives with tags, something I had previously attempted by hand. When I asked about using image recognition in this domain, Jack mentioned a novel idea: using facial recognition to find family likeness. Even simple image recognition would prove incredibly useful for making the Nullbrook archive easy to use.
The end state would be a system that can identify the objects in a given old photo and estimate the year it was taken. This is obviously going to be flawed, since old cameras and film can be used decades after they're made and fashion styles aren't absolute. Initial work on this project may need to separate photos by decade to avoid over-fitting on color tones. I don't know how the hell ML works nowadays, heh.
- It would probably be better to train my own model on photos I tag rather than use systems trained on clear, modern photos (see the sketch after this list).
- Some work in this space:
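As a starting point, here's a minimal sketch of what that training could look like, assuming a recent PyTorch/torchvision and a hypothetical `decade_photos/` folder of scans sorted into one subfolder per decade from my own tagging:

```python
# A minimal sketch, assuming PyTorch/torchvision; the decade_photos/
# folder layout (one subfolder per decade, filled from my own tags)
# is made up for illustration.
import torch
from torch import nn
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing; dropping color channels entirely
# might be worth trying to fight over-fitting on decade color tones.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("decade_photos", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a pretrained backbone and retrain only the classifier head,
# one output class per decade folder.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:  # one epoch, for brevity
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

Only the classifier head is retrained here; whether freezing the backbone actually helps with the color-tone over-fitting problem is an open question.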
With the rise of easily doctored photos, videos, and audio, it has become increasingly important to validate the source of images and the material they contain. One idea would be to use cryptographic signatures to identify the source device, location, and time of a photo. A key unique to the camera, or provided by the user, would sign a hash of the user/model/date/location of the photo; that record could then be integrated into a media platform, which would display the information and the hash to prove the photo's authenticity (a rough sketch follows the note below).
- Certainly a substantial overhead versus simply snapping a photo and uploading it.
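For illustration, a minimal sketch of the signing and verification flow, assuming the Python `cryptography` package; the metadata fields and file name are made up:

```python
# A rough sketch, assuming the `cryptography` package; metadata field
# names and the photo file are hypothetical.
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Per-device (or per-user) signing key; in practice this would live in
# a secure element on the camera, not in application code.
device_key = ed25519.Ed25519PrivateKey.generate()

def sign_photo(image_bytes: bytes, metadata: dict) -> dict:
    """Hash the image plus its metadata, then sign the digest."""
    payload = json.dumps(metadata, sort_keys=True).encode() + image_bytes
    digest = hashlib.sha256(payload).digest()
    return {
        "metadata": metadata,
        "sha256": digest.hex(),
        "signature": device_key.sign(digest).hex(),
    }

def verify_photo(image_bytes: bytes, record: dict,
                 public_key: ed25519.Ed25519PublicKey) -> bool:
    """Recompute the digest and check the signature; True if authentic."""
    payload = (json.dumps(record["metadata"], sort_keys=True).encode()
               + image_bytes)
    digest = hashlib.sha256(payload).digest()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), digest)
    except InvalidSignature:
        return False
    return digest.hex() == record["sha256"]

# Example: sign a photo with made-up metadata, then verify it.
photo = open("family_reunion.jpg", "rb").read()
record = sign_photo(photo, {"user": "mpd", "model": "Canon AE-1",
                            "date": "2019-06-01", "location": "40.0,-83.0"})
assert verify_photo(photo, record, device_key.public_key())
```

The platform would only ever hold public keys; anyone could recompute the hash and check the signature without trusting the uploader.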
A piece of VR software that would allow for the visual display and reactive manipulation of packets: their routes through a graph of the network, and the opening and inspection of their contents through hand motions. Kinda like Gibsonizing Wireshark/Netgraph. A simple prototype would take pcaps, generate a graph, and allow movement through that graph in VR space (a sketch of the graph-building step follows the list below).
- Ideally, parameters and layers can all be controlled through a HUD-like interface; bonus points for AR.
- Requests can be selected and opened in a way natural to humans, like unrolling a scroll.
- Upon opening, the packet is displayed as a sort of sheet of paper with the headers at the top and the contents at the bottom.
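The pcap-to-graph step of that prototype is straightforward; here's a minimal sketch assuming scapy and networkx (the VR rendering layer is the actual hard part and is out of scope):

```python
# A minimal prototype sketch, assuming scapy and networkx are installed;
# the capture file name is made up.
import networkx as nx
from scapy.all import rdpcap
from scapy.layers.inet import IP

def pcap_to_graph(path: str) -> nx.DiGraph:
    """Read a capture and return a directed host graph weighted by packet count."""
    graph = nx.DiGraph()
    for pkt in rdpcap(path):
        if IP not in pkt:
            continue
        src, dst = pkt[IP].src, pkt[IP].dst
        if graph.has_edge(src, dst):
            graph[src][dst]["packets"] += 1
        else:
            graph.add_edge(src, dst, packets=1)
    return graph

# Example: dump the edges a VR front end would lay out in space.
g = pcap_to_graph("capture.pcap")
for src, dst, data in g.edges(data=True):
    print(f"{src} -> {dst}: {data['packets']} packets")
```

A front end would then position the nodes in 3D space and animate individual packets along the edges for inspection.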