Precision LLMs and An Endless Bouquet of Flowers
It seems to me that there is so much room right now to create technology for simple, everyday enjoyments like growing plants. One area worth exploring is the customization of what to grow and how to grow it, with LLMs playing a consultant role: suggesting which seeds to start in which locations if you want a particular bouquet of flowers in your home. I know it sounds simple, and not as sophisticated as having AI perform brain surgery on us, but to me this is the beginning of refining AI so that it meets very specific needs and wants in each person's life (and maybe it also makes me happy). For me, a fresh bouquet of bespoke flowers sounds like as nice a place to start as any.

But there is an issue with LLMs that I'm sure many have noticed when using them: it is very hard to repeat anything. Sometimes the artifact an LLM produces exists only to be consumed as an artifact, rather than contributing to an ongoing building of knowledge. Case in point: I have generated thousands of images with different programs, and even sold some as clipart on Etsy, but I cannot reproduce any of those images exactly, even with the same prompt. I enjoy this about LLMs, but I also think that in order to progress to another phase of interaction across different use cases, something needs to be retained and repeated, and even traced back through the data, so that a linkage of sorts can start to be built: a map of what was intended initially and how it grows over time with use. For what I am building, I think the use cases should involve information that is not private and is beneficial for a community and for nature, so I am starting with gardening.

In order to do this, I want to create a way to store not only information of use to individuals, but also the many pinpoint moments where LLM information is synthesized, and then map those moments to other moments that are similar. Instead of identifying the specific snippet of data that was used to generate the artifact from the prompt, I would like to suggest we start thinking in shapes, and assign vectors to them, to help us build a traceable aggregate that might not be apparent from any single usage. The shapes themselves are containers that hold information about usage; when the shapes are matched across many other usages, a map begins to form and can be a starting point for aggregating or adding findings and information about specific issues. This ensures that even if data in the original model is changed, we still have snippets of a map of use that are beneficial for specific topics. I would imagine that components of the map itself will shift and change over time with use, so they will need a steady state and ongoing iterations. What I like most about this idea is that it ensures privacy while also allowing aggregation of complex information pathways that can be traced when needed and compared for patterns, even if we cannot see the pattern at the start.
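To make the idea a little more concrete, here is a minimal Python sketch of what matching shapes across usages could look like. Everything here is hypothetical: the `ShapeMap` class, the cosine-similarity threshold, and the idea of representing each usage moment as a small feature vector are assumptions for illustration, not an existing implementation.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class ShapeMap:
    """Hypothetical container: groups usage 'shapes' (vectors describing a
    moment of LLM use) into clusters of similar moments, forming a map."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.clusters = []  # each: {"centroid": [...], "members": [...]}

    def add(self, shape, label):
        """File one usage shape into the first sufficiently similar cluster,
        or open a new cluster if nothing matches."""
        for cluster in self.clusters:
            if cosine(cluster["centroid"], shape) >= self.threshold:
                cluster["members"].append(label)
                # Nudge the centroid toward the new shape (running mean),
                # so the map shifts gradually with use, as described above.
                n = len(cluster["members"])
                cluster["centroid"] = [
                    c + (s - c) / n for c, s in zip(cluster["centroid"], shape)
                ]
                return cluster
        cluster = {"centroid": list(shape), "members": [label]}
        self.clusters.append(cluster)
        return cluster
```

For example, two near-identical "seed-starting" usage shapes would land in one cluster while an unrelated shape opens a new one, and no raw prompt or personal data needs to be stored, only the shape and its cluster.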

I am considering creating this for what I am working on because I want people to feel that their information is private, while still allowing it to create community and be compared with others who might be more or less successful at growing fruits and vegetables in their climate. It also lets me start thinking about how much energy computing consumes, and about less intensive ways to store information by creating these types of maps, which are essentially reference points.



The shapes I am referring to capture a moment of a neural network putting together data from a prompt, a specific moment of usage for the end user, an artifact generated. But I think they will create a topography of use that can be associated and repeated, and as the landscape of use evolves, it will become easier to understand how the LLM is functioning. The trick, of course, will be how many different shapes there are and how refined an LLM might become over time, but at the start I want to create a library for very specific use cases that also ensures the privacy of individuals seeking information. I will continue to experiment with this in 2024. I cannot yet assure myself that it will work, but I think it would be very interesting. I also suspect it would conserve computing power, serving as a map of use that can lead to precision, repeatability, and comparability, but I am not certain.
