I have studied various Christian religions and have liked the teachings of the Mormons (they currently prefer to be called “members of the restored Church of Jesus Christ”).

I generally try to abide by 3 Ne 11:29-30. I think my favorite scripture is 1 Ne 11:17, as it answers substantially all questions with faith and humility until you have time to properly study them out.

I am prone to talk about what I believe in a manner that I think gives respect all around, on topics such as the Epicurean paradox, the Nicene Creed, polygamy, and Judaism.

I feel like I have a few strengths that I would love to share with those curious: my method of praying as a two-way conversation, my affinity for administration, and the “hiding in plain sight” cheats for staying in control during persecution, dreams, and restrictive behavioral loops.

  • 1 Post
  • 9 Comments
Joined 2 years ago
Cake day: December 13th, 2023

  • I built a full-stack SaaS that is deployed at my work. It is exposed to the internet, and I have only used pentesting and asking the AI “what is this”, “fix this”, and feature requests.

    It has awful context limitations. Saying “do this” means it overfills its context halfway through and loses the nuance as it tries to restart the task after a summary. I don't trust it to make a todo list and keep to it, so I have to work with slightly longer-term markdown files as memory.

    I have had good progress when I say “add this pentest_failure/feature_request to an open items list markdown file”; the AI finds the context, defines the issue, and updates the file. Rinse and repeat. THEN I say “I want to make a refactor that will fix/implement as many of the open items list issues as possible; can you make a refactoring spec”. THEN I carefully review the business logic in the refactoring spec, THEN I tell the AI to implement phase 1 of the spec, then I test, then I say do phase 2, and so on.
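    A made-up example of what one entry in such an open items file could look like (the file layout and issue details are illustrative, not from my actual project):

    ```markdown
    ## Open items

    - [ ] pentest_failure: /api/login returns a stack trace on malformed JSON
          Context: the error handler leaks internals; return a generic 400 instead.
    - [ ] feature_request: CSV export on the reports page
          Context: touches the report service and the frontend table.
    ```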

    Design concerns like single source of truth, DRY, separation of concerns, and YAGNI have come up. I have asked about API security best practices. I have asked about test environments vs. production.
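    To give a sense of the DRY concern, here is a made-up Python sketch (the function names and the email rule are hypothetical, not from my codebase): the same validation duplicated in two endpoints, then factored into one helper that acts as a single source of truth.

    ```python
    # Duplicated logic (violates DRY): the same check lives in two endpoints.
    def create_user(payload: dict) -> dict:
        if "@" not in payload.get("email", ""):
            raise ValueError("invalid email")
        return {"action": "create", **payload}

    def update_user(payload: dict) -> dict:
        if "@" not in payload.get("email", ""):
            raise ValueError("invalid email")
        return {"action": "update", **payload}

    # DRY refactor: the rule lives in exactly one place.
    def validate_email(payload: dict) -> None:
        if "@" not in payload.get("email", ""):
            raise ValueError("invalid email")

    def create_user_v2(payload: dict) -> dict:
        validate_email(payload)
        return {"action": "create", **payload}

    def update_user_v2(payload: dict) -> dict:
        validate_email(payload)
        return {"action": "update", **payload}
    ```

    The payoff is that a pentest finding against the rule gets fixed once instead of hunted down in every copy.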

    I developed without git, and the sheer amount of dumb duct-tape code made by a no-short-term-memory AI and exposed by pentesting was infuriating, but I got a process that works for my level of understanding.
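    For anyone in the same boat, a minimal git loop looks roughly like this (commit messages and the identity values are placeholders):

    ```shell
    # One-time setup in the project directory.
    git init
    git config user.name "yourname"
    git config user.email "you@example.com"
    git add .
    git commit -m "baseline before AI changes"

    # After each AI change: review the diff, then snapshot it.
    git diff            # see exactly what the AI touched
    git add -A
    git commit -m "phase 1 of refactoring spec"
    ```

    The point is that every AI edit becomes reviewable and reversible instead of duct tape on duct tape.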

    AI skills, rules, etc. are still not quite clear to me.


  • So far, I have made a full-stack server on a Raspberry Pi from only AI and pentesting. It is deployed (live and used by my coworkers as part of their workday). I intend to open source it once I understand more about git.

    I recently made one pull request on a project that is very well maintained, and I indicated that I used AI. It is really eye-opening to see what good development and maintenance look like.

    I am currently doing a deep dive into each of the feedback notes (left by a collaborator). I am by no means fast, as I'm trying to maximize my learning from each scrap of interaction and error. Several concepts are hard for me to remember well, so I rely on my high-level memory and commands to the AI each time they come up.


  • I would like to have a way to track my use of FOSS, but I want to retain my privacy, so I would be interested in this app. I would also like a different way to allocate, so that apps that increase my efficiency (so I don't spend a long time troubleshooting something) get the bigger slice. Perhaps there could be an optional “impact” survey with varying degrees of granularity: one with only thumbs up and down, OR one with 5 stars, OR one with a 1-100 scale. Honestly, this would be really cool if adoption got so high that it became the “Patreon” of Linux apps (i.e., having a “like” at the bottom that would remind you of high-impact apps).
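    A rough Python sketch of that allocation idea, assuming each app ends up with a numeric impact score where higher means more impact (function name and scores are hypothetical):

    ```python
    def allocate(donation: float, impact_scores: dict[str, float]) -> dict[str, float]:
        """Split a donation across apps in proportion to their impact scores.

        Scores can come from any survey granularity: thumbs mapped to 0/1,
        stars mapped to 1-5, or a raw 1-100 rating.
        """
        total = sum(impact_scores.values())
        if total == 0:
            # No feedback yet: fall back to an even split.
            share = donation / len(impact_scores)
            return {app: share for app in impact_scores}
        return {app: donation * score / total for app, score in impact_scores.items()}

    # Example: a 5-star editor gets five times the slice of a 1-star tool.
    split = allocate(10.0, {"editor": 5, "screenshot-tool": 1, "terminal": 4})
    ```

    The nice property is that the whole donation is always distributed, whatever scale the survey used.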