Gemini Created My First Firefox Extension

I don't know about others, but for me, as someone with a programming background from many years ago and no desire to re-ignite the coding candle, these LLMs are pretty decent from many points of view.

First of all, they are a gold mine for research, reducing the time needed to dig up what you're interested in to a fraction of what I would have spent during my active years in programming.


Ideogram helped with the image.

Then, for small, relatively straightforward projects, they are as easy as 1-2-3, even if you have no idea where to start.

I just coded (by "I", I really mean Gemini Pro Preview) a simple Firefox extension and got it approved and enabled (for my own use, not published on their website) in a few hours. That was after first creating a developer account at Mozilla, upgrading a bunch of stuff, and installing something like web-ext to create a signed zip file to upload to their site for approval. The approval came automatically within a few minutes, though I'm pretty sure the fact that I wasn't going to distribute the extension through their website mattered for the quick turnaround. I had never done any of that before and had no idea how to do it.
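For anyone curious what the starting point looks like, here is a minimal sketch of an extension's manifest.json, assuming a Manifest V2 extension with a single content script. The name, the match pattern, and the content.js file are placeholders for illustration, not the actual extension I built:

```json
{
  "manifest_version": 2,
  "name": "my-first-extension",
  "version": "1.0.0",
  "description": "A minimal example extension that injects one content script.",
  "content_scripts": [
    {
      "matches": ["*://*.example.com/*"],
      "js": ["content.js"]
    }
  ]
}
```

The packaging side is similarly small: running `web-ext build` in the extension's directory produces the zip, and signing happens either by uploading that file in the Mozilla developer hub or directly from the command line with `web-ext sign`, using the API key and secret generated in your developer account (newer web-ext versions also take a `--channel unlisted` flag for self-distributed extensions like mine).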

I also think LLMs (particularly those trained to generate code) are great for creating specific algorithms rather than full, more complex apps, and for debugging, both code and other things, like OS issues.

I am pretty sure they would do at least a decent job of analyzing a website's SEO and offering advice on it, but the question is how long SEO as we know it will remain relevant...

In my latest post on the topic, I talked about some issues I ran into while attempting to build an app over a few days. I haven't continued that project, but I received some good advice on what I could have done better to keep the LLM more consistent over time.

While those workarounds can improve the experience of working with LLMs in such situations, the models still can't handle this use case of development over a longer period very well, due to their resource constraints. But they can surely help along the way.

But if you can break a bigger task into multiple ones that can be completed in a reasonable number of prompts, LLMs should be able to carry them out without many issues.

Also, something else I figured out today. I use LLMs on their free plans, and at one point I ran out of prompts for the day on Gemini. Since my next prompt didn't even need any context, I asked ChatGPT instead and continued the conversation there. What occurred to me is this: if I use a certain model for coding, I could save prompts by asking derived questions of another model. Something like: how to fix error XYZ (in general, or in a given context)...

Now all I need is to put my mind to it and see what other small projects LLMs could help with. I'll start small, when I have time, and if I think any of them would be useful to the Hive community, I'll share them.

Posted Using INLEO



5 comments

That's true, separating into smaller modules or tasks should help avoid losing the context due to memory limitations. That's a good trick.

And that's how people learn modular programming, if they didn't before, lol.

Yea. The memory limitations are tricky to deal with. I am interested in seeing how things go, but I think that changing models can throw things off quite a bit. So I wonder if doing that back and forth could cause issues.

"So I wonder if doing that back and forth could cause issues."

Not if you keep the context self-sufficient. Either you provide enough context in the prompt itself, or you only use prompts that don't need context related to the project you are building with the main model.
