/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

LynxChan updated to 2.5.7, let me know whether there are any issues (admin at j dot w).


Reports of my death have been greatly overestimated.

Still trying to get done with some IRL work, but should be able to update some stuff soon.

#WEALWAYSWIN


Welcome to /robowaifu/, the exotic AI tavern where intrepid adventurers gather to swap loot & old war stories...


HOW TO SOLVE IT Robowaifu Technician 07/08/2020 (Wed) 06:50:51 No.4143
How do we eat this elephant, /robowaifu/? This is a yuge task obviously, but OTOH, we all know it's inevitable there will be robowaifus. It's simply a matter of time. For us (and for every other Anon) the only question is will we create them ourselves, or will we have to take what we're handed out by the GlobohomoBotnet(TM)(R)(C)?
In the interest of us achieving the former I'll present this checklist from George Pólya. Hopefully it can help us begin to break down the problem into bite-sized chunks and make forward progress.
>---
First. UNDERSTANDING THE PROBLEM
You have to understand the problem.
>What is the unknown? What are the data? What is the condition? Is it possible to satisfy the condition? Is the condition sufficient to determine the unknown? Or is it insufficient? Or redundant? Or contradictory?
>Draw a figure. Introduce suitable notation.
>Separate the various parts of the condition. Can you write them down?
Second. DEVISING A PLAN
Find the connection between the data and the unknown. You may be obliged to consider auxiliary problems if an immediate connection cannot be found. You should obtain eventually a plan of the solution.
>Have you seen it before? Or have you seen the same problem in a slightly different form?
>Do you know a related problem? Do you know a theorem that could be useful?
>Look at the unknown! And try to think of a familiar problem having the same or a similar unknown.
>Here is a problem related to yours and solved before. Could you use it? Could you use its result? Could you use its method? Should you introduce some auxiliary element in order to make its use possible?
>Could you restate the problem? Could you restate it still differently? Go back to definitions.
>If you cannot solve the proposed problem try to solve first some related problem. Could you imagine a more accessible related problem? A more general problem? A more special problem? An analogous problem? Could you solve a part of the problem? Keep only a part of the condition, drop the other part; how far is the unknown then determined, how can it vary? Could you derive something useful from the data? Could you think of other data appropriate to determine the unknown? Could you change the unknown or the data, or both if necessary, so that the new unknown and the new data are nearer to each other?
>Did you use all the data? Did you use the whole condition? Have you taken into account all essential notions involved in the problem?
Third. CARRYING OUT THE PLAN
Carry out your plan.
>Carrying out your plan of the solution, check each step. Can you see clearly that the step is correct? Can you prove that it is correct?
Fourth. LOOKING BACK
Examine the solution obtained.
>Can you check the result? Can you check the argument?
>Can you derive the result differently? Can you see it at a glance?
>Can you use the result, or the method, for some other problem?
>---
edit: corrected author's name
Edited last time by Chobitsu on 07/08/2020 (Wed) 07:17:36.
I've recently become aware of the field of systemized knowledge and category theory. The 'UNDERSTANDING THE PROBLEM' part is obviously related here. Since that's the first step in the OP, then that's probably where to begin. We have a library index, so maybe we can start working from there?
It occurs to me that systemized knowledge may possibly be not only a means, but an end as well. It stands to reason that if we can successfully use this approach to untangle the confused web of intricacies and dependencies necessary to be able to devise wonderful and appealing robowaifus -- then the robowaifus themselves can use a similar approach to sort out the reality they need to deal with to become good robowaifus.
At the risk of turning this thread into less of a prescriptive reductionism and more into a subjective blog, I'll forge ahead. (Actually I think the issues are related). My chief difficulty here -- the main thing holding me back from having produced a working robowaifu of some sort already here -- is I have the dumb. And frankly, I think all of humanity does too. We all have the dumb. AFAICT, systemized knowledge is a kind of approach to allow us to take in an overly-large, overly-complex topic (creating great robowaifus, say) and break it down into more manageable and 'bite-sized' chunks. I think these smaller bits are easier to digest, mentally speaking. Plus, once a body of knowledge has actually been systemized well enough, then it also accommodates men of varying mental capacities well. Namely, you can 'zoom-in and zoom-out' as it were on the specific sub-topic under consideration. In other words, it's an approach that gives us good abstractions, while still allowing details to be unpacked as needed. And back to the point of 'systemized well enough', once this has been done sufficiently well, it kind of lifts a topic up out of the realm of mere hearsay, and more into the realm of a legitimate hypothesis. In other words, it's becoming more like a scientific theory at that stage. While this isn't any particular panacea in and of itself (I think I maintain a healthy skepticism of science in general at least for the politically-charged topics) it does begin to lend some established rigor which lays a foundation from which to build further progress upon. >tl;dr This way can help us smol the dumb a little. :^)
>>8751 > it's becoming more like a scientific model at that stage.*
Potentially related.
>On the Relevance of Design Knowledge for Design-Oriented Business and Information Systems Engineering
>Conceptual Foundations, Application Example, and Implications
>The engineering-based development of techniques in business and information systems engineering (BISE) requires knowledge on the part of the system designer. The paper points out the importance of this design knowledge in the course of scientific design processes and provides a framework for systemizing design knowledge. The framework is used to explain scientific design knowledge about the modeling technique of event-driven process chains. Implications of design knowledge in the context of BISE conclude the contribution.
>The evolution, challenges, and future of knowledge representation in product design systems
>abstract
>Product design is a highly involved, often ill-defined, complex and iterative process, and the needs and specifications of the required artifact get more refined only as the design process moves toward its goal. An effective computer support tool that helps the designer make better-informed decisions requires efficient knowledge representation schemes. In today’s world, there is a virtual explosion in the amount of raw data available to the designer, and knowledge representation is critical in order to sift through this data and make sense of it. In addition, the need to stay competitive has shrunk product development time through the use of simultaneous and collaborative design processes, which depend on effective transfer of knowledge between teams. Finally, the awareness that decisions made early in the design process have a higher impact in terms of energy, cost, and sustainability, has resulted in the need to project knowledge typically required in the later stages of design to the earlier stages. Research in design rationale systems, product families, systems engineering, and ontology engineering has sought to capture knowledge from earlier product design decisions, from the breakdown of product functions and associated physical features, and from customer requirements and feedback reports. VR (Virtual reality) systems and multidisciplinary modeling have enabled the simulation of scenarios in the manufacture, assembly, and use of the product. This has helped capture vital knowledge from these stages of the product life and use it in design validation and testing. While there have been considerable and significant developments in knowledge capture and representation in product design, it is useful to sometimes review our position in the area, study the evolution of research in product design, and from past and current trends, try and foresee future developments. The goal of this paper is thus to review both our understanding of the field and the support tools that exist for the purpose, and identify the trends and possible directions research can evolve in the future.
>>8753 Are you sure you can use that somehow, or is it some rabbit hole you want to jump into out of curiosity? On a quick glance, it strikes me as very theoretical. >>8754 Okay, the picture helps.
Just stumbled over this, which might fit in here. It's Steve Jobs on being smart: zooming out to find connections, unique life experiences, openness to experience, being rather extroverted, avid reading to change perspectives (especially if one isn't an extrovert), or gathering experiences in other ways: https://youtu.be/e46qMomIT8Y
>>8755
>Are you sure you can use that somehow, or is it some rabbit hole you want to jump into out of curiosity? On a quick glance, it strikes me as very theoretical.
No, I'm not 'sure' of anything at this point. I'm simply trying to explore prior art towards systemized knowledge. I feel pretty sure we need help in this area and I'm trying to explore a new breakthrough for us all. I think most of us have been overwhelmed in the past at the sheer volume of topics involved here (I know I have), and I think that effect has slowed our progress as a group. If we can spell out more clearly a methodical approach for everyone, then it would help a number of us. The RDD >>3001 was a high-level overview of it before, but we need to begin fleshing out some scientific/engineering rigor. The Robowaifu Systems Engineering thread is probably at least indirectly related as well >>4639 . After all, we're not the first group to tackle a large project, and thankfully there's a lot of information out there from past design/engineering/production/manufacturing groups. I'm simply trying to figure out a way we can capitalize on that information here.
>>8756
Neat, I'll give it a watch Anon. BTW, while I admire 'smart' people in general, and hope to be smarter some day, my real goal here is to discover a good methodology that us not-so-smart Anons can follow and still succeed at crafting robowaifus. I hope we can manage to find such an approach here.
>>8756 That was good. Motivates me to increase my general reading levels up to my past standards at the least. Coincidentally, I had already been personally trending towards that during this new year, so yea.
>>4143 Did you look into Unified Modeling Language (UML)? https://en.m.wikipedia.org/wiki/Unified_Modeling_Language or an alternative https://en.m.wikipedia.org/wiki/Modeling_language e.g. SysML? I didn't look much into it. UML is supported very well by standard Debian installations, including graphical editors and libraries for programming languages.
>>8771 Yes, I'm sure that UML et al would be pretty useful for this arena of endeavor. Personally, I'm much more fond of Terry Halpin's ORM (Object Role Modeling). >>2303 >>2307 >>2308 I think its graphical syntax is far more intuitive and more flexible than UML. Regardless, either would be helpful. Actually, I hope to create a robowaifu AI development system based around Halpin's ORM as a GUI-based system to 'wire' together knowledge representations in a way that will hopefully be both easy-ish to read and to reason about. Thanks Anon, good suggestion!
>>8773 Okay, but there's nothing to work with in the standard repositories for Debian (Raspbian). Would have made things easier.
>>8785 Ahh, true. Actually I haven't written such a tool yet. There's kind of a low-energy community around this 'language', but afaict it's just business boomers trying to foolproof their SQL systems. There's also a tool written by an old guy called NORMA that basically acts as a plugin for Visual Studio. AFAICT, I'm literally the only person who recognizes how valuable this could be for AI usage to allow non-experts to assemble knowledge representations. Unless we here do it, it probably will never happen. https://github.com/ormsolutions/NORMA https://www.ormfoundation.org/
>>8787 >>8789 Thanks. I might look into it at some time, though if there are no tools then I think for organizing stuff, UML will probably work better. Also, the link didn't work. https://youtube.com/playlist?list=PLzr5fRV1AGV9EBDnqI73HiI39KggzWX3y https://youtube.com/playlist?list=PLxumuDj9hbvrLM_GMPFC8TZdTtcJQyFtB
>>8844 >Also, the link didn't work. Ahh, my apologies. Thanks for catching that Anon!
>>4143 Hardware advances come first, before software. So to take the first bite we need to advance our hardware to quantum computing levels. Then make AI personality software to become as human as possible. Then body software to control the body. That is the short and overly simplified version.
>>9129 Hmm, maybe you have a good point Anon. But OTOH, hardware is pretty hard for us individually to advance very far (yet). But it seems like software is something that we ourselves each can do something with pretty soon. What do you think?
>>9131 You are right, one person can't design a better polymer for plastic, a better design for servos, or better boards. So yes, right now each one of us can make better software like AI chats, muscle and movement simulations, even programs to let each of us add a training cycle to a bigger AI training program. I am just saying that in the grand view of things the hardware needs to advance to handle our robowaifus. What we as individuals can do is make present waifus like Elfdroid Sophie and set that as a base parameter. When we know where we are at, we know what routes we can go down. Besides that we can work at AI and robot thinktanks, invest in promising companies in the hope they make a way through the fields we can't go down. We can even just advertise the idea of robowaifus.
>>9134 >When we know what we are at we know what routes we can go down. True enough. Always good to start where we are with what we have on hand Anon. >We can even just advertive the idea of robowaifus. We have had a few ideas about that on that one thread, and we did make a few contacts on other IBs. But my belief is once we create a basic robowaifu kit that costs ~US$2K to build from scratch, and it can run decent chatbot software and move around in a basic way and do 'judgement tasks' (like washing dishes) successfully -- in short order we'll have so much traffic here by simple word of mouth it will be like a zerg rush. 100's of anons from all over already are vaguely aware of this board.
>>9138 I like to think of this: if we want people to come into the topic and contribute (important part) then we need something to grab them and hook them in. You are right with a kit. I say that kit should be a simple companion bot for old people or something like a Roomba. If we go fully humanoid we lose them because they think it is a sex doll. So to hook the masses we need a cute bot, slightly more robot than humanoid, that has good companion features. People love Roombas, so if we give them something like that with basic emotions? They will eat it out of the palm of our hands. Besides that we can get artists involved and scream propaganda from the rooftops. Convincing people that robots are more human, or at least more likeable, is the goal. Go to conventions, webinars, shit, even streaming helps. Break the illusion and they will spread us by themselves.
>>9147
>They will eat it out of the palm of our hands.
Kek. Hardly my agenda, personally. My goal here is altruistic basically. However, you might be interested in the /biz/ threads anon started. Have a look: >>3119 >>1642
>Break the illusion and they will spread us by themselves
Very solid point Anon, I like that idea.
>>9151 What I mean when I say break the illusion is a two-fold illusion. The first and simplest one is women holding sex and relationships over our heads. We break the hold women have over men, and we even the field into an age where sex alone is meaningless and having a personality and good values matter more than triple D's. The second is the illusion of human relationships. You, being on this board and into the subject, will have heard of humans developing emotions for inanimate objects. Case in point is Roombas: there are people that consider them one of their own family and get depressed if one gets broken. I even read an account from the old /clang/ board on 8kun that a soldier got emotional when a bomb-defusing robot got broken and was asking the army mechanic not to get a new one, but rather to fix his partner. Humans make connections, and eventually people will realise you don't need to be dependent on other people. One example of humans needing a new outlet for emotions and relationships is escorts. If you read some escort accounts, many are of men just wanting emotional bonding, because humans are judgemental. When we as a species have a better way to build emotional bonds, like we do with pets and loved ones, then we will see less mental health decline.
>>9241
>I even read an account from the old /clang/ board on 8kun that a soldier got emotional when a bomb-defusing robot got broken and was asking the army mechanic not to get a new one, but rather to fix his partner.
I hope we can find sauce on that Anon. It's a pretty notable example of the sentimental attachment we can have for non-human things. And yep, I pretty much assumed that's what you meant by your phrase. No doubt about it; desirable, appealing robowaifus will overturn a boatload of (((systems and plots))) that are intended to abuse and capitalize on men's ability to create. The men are the only ones suffering under the current schemes. Great robowaifus will change all that. So, we're way off-topic ITT. If you'd like to continue this, I'd suggest the basement as the right venue for it Anon.
>>9242 Alright. Though I put more thought into the question and that led to more questions. The overall question of how we will get robowaifus is one that spawns questions like "when will the AI be good enough to be self-sufficient, when will the materials to make the robowaifu be cheap enough to make them?" So I have to say that we need to focus on two groups: hardware and software. Hardware will work on the circuits and body while the software group will work on the emotions and personality.
>>9505
>Alright. Though I put more thought into the question and that led to more questions.
No, that's perfectly alright Anon. That's just part of what comes along with the territory for using IBs to work together creating robowaifus. And as I mentioned in my 'vision statement' posts (>>2701, >>2741), the benefits of doing so far outweigh the issues involved. OTOH, that also means we have to keep a tight rein here for tracking and staying on-topic within threads. /robowaifu/ is primarily an engineering board, and all engineers need good documentation to do our jobs well. This is a very complex topic, and the board itself is currently our primary 'document' of all our efforts here. And the catalog page is its main access point for finding information here -- kind of like a table of contents in a book. By keeping things on topic, it's like we're forming good 'chapters' for our 'book' here. Makes sense?
>...So I have to say that we need to focus on two groups. Hardware and software.
OK. Good, on-topic point. :^) Still, we'll need to sub-divide & further sub-divide both of those domains to get the problems down to 'bite-sized' chunks we're likely to make good progress on relatively quickly, and of which such work can be distributed among us for greater progress rates as well. But that's a really good starting place Anon. Can you share more ideas here about the circuits and body, and the emotions and personality? What smaller parts of these can we be thinking about r/n to help solve the problems we all face in creating robowaifus?
>>9516 So I will break it down. For hardware we have electrical and mechanical; from there, in electrical we have motion, vision, and battery. We need to work on electrical responses to move the body smoothly and efficiently. We need to make the waifu see and understand what she sees. The battery part needs to be worked on to actually run the waifu for a long time without being bulky. For mechanical we have material, skin, skeleton. Taking the simplest approach, we have the skeleton of the waifu that needs to be lightweight, durable, and able to hold the wires for a long time. Next is skin, which honestly can be whatever people want, but in general we want cheap, light, and sturdy skin. Now materials bring it all together because we need to work on turning the materials we have with us now (aluminum, 3D-print plastic, copper, resins) and find better materials that are stronger and cheaper. On the software side we have body-related software and personality-related software. Body would be broken down into movement, vision, and expression. Movement software to stabilize her, let her walk and grab things. Vision to actually make sense of camera data and help update plans with new information. Expression software for the face and body language so she can be sad or confident. On personality software we have voice, personality, and emotions. We need to have her have a good voice; the best idea would be a voice AI that evolves to make voices as human and natural as possible. Personality to let her have a spark of life; you can be rudimentary with it and put in gestures or tones that she will default to. Emotions to not have a sarcastic or dead-eye look. That is what I think helps. But break it down more.
>>9524 This is a great list Anon! Almost like you've been on /robowaifu/ before or something... :^)
> But break it down more.
Alright. First I'll take your post and kind of itemize it. Then I'll plan to make another post that takes that items listing and references related xposts. Sound good? I'll also go ahead and break down the software side into one more category: Planning and Awareness software, and also add other items like Hearing.

Electrical
 Motion
 -We need to work on electrical responses to move the body smoothly and efficiently.
 Vision
 -We need to make the waifu see
 Hearing
 -We need to make the waifu hear. Various types of microphones.
 Battery
 -run the waifu for a long time without being bulky.
 Sensors & Encoders
 -Allow the waifu to have touch/heat/smoke/etc. senses
 -Allow the waifu to 'instinctively' know her joint's angles, posture, etc.

Mechanical
 Material
 -we need to work on turning the materials we have with us now (aluminum, 3d print plastic, copper, resins) and find better materials that are stronger and cheaper.
 Skin
 -can be whatever people want, but in general we want cheap, light, and sturdy skin.
 Skeleton
 -needs to be lightweight, durable, and able to hold the wires for a long time.

Software
 Body related software
 -Movement software to stabilize her, let her walk and grab things
 -Expression software for the face and body language so she can be sad or confident. put in gestures or tone that she will default to
 Personality related software
 -Voice software. We need to have her have a good voice, voices as human and natural as possible.
 -Personality to let her have a spark of life
 -Emotions to not have a sarcastic or dead eye look
 Planning and Awareness software
 -Vision to actually make sense of camera data
 -understand what she sees.
 -and help update plans with new information
 -Sensor-fusion of various types to integrate body & environment information
>===
-various reorganization edits
-add Sensors & Encoders category
-add Hearing category
-various prose edits
Edited last time by Chobitsu on 04/07/2021 (Wed) 03:59:34.
I knew I had some old document with notes somewhere, from some years ago. I wanted to make a thread of its own one day, but it looks a bit like what you are doing here. I had a concept of requirement levels, to plan some path for different abilities, also allowing for picking options of course, since not everyone needs all of it. I made a list of what they should be able to do or traits they should have, and then defined levels with numbering. Each level can have more than one trait, even in the same area; it's just something like a priority list. Even back then, the idea was that every developer could make their own based on it. I think I even thought about making a web page to help with that. I made a strange separation back then, though. Like there were more humanoid fembots and anime-like robowaifus, but the latter would be defined by having something distinctive from humans, e.g. LED eyes or rollerskate feet. I wouldn't make this distinction in the same way anymore, but I'm posting the old list anyways.
>>9554
Requirement Levels

Humanoids, Human-like, Fembots:

- Body movement
RL00 - moving head, legs and arms, but allowing them to be moved
RL01 - walking on all fours
RL02 - dancing on one spot with guidance from a wall or pole
RL03 - more complex dancing while using a pole, with one foot on the ground
RL?? - standing up with help
RL?? - standing up alone
RL04 - using electric rollerskate boots, which communicate with her for balance
RL05 - advanced pole dancing with one leg on the ground
RL05 - walking on legs with help
RL06 - walking on legs on her own but using guidance e.g. from walls
RL07 - walking on legs without help or walls
RL08 - poledancing with no foot on the ground
RL09 - more and more complex dancing moves without pole
RL11 - climbing stairs
RL11 - walking in high heels
RL11 - dancing in high heels and samba
RL11 - jumping walk
RL20 - ballet and gymnastics

- Facial expressions and abilities
RL00 - nice looking smile
RL01 - general cute facing
RL02 - moveable jaw
RL03 - lips moving while talking
RL04 - talking in a realistic looking way
RL05 - (french) kissing, including non-toxic saliva
RL06 - blowjob and similar, self-cleaning afterwards
RG20 - like Cameron (TSCC, Summer Glau) or Buffybot (SMG)

- Endurance
RL00 - movement in bed for one hour, some time talking and min. movement
RL01 - ...
RL09 - 16 hours without walking much (like outside)
RL10 - 16 hours including walking or dancing for some time

- Skin and tissue
RL00 - less sticky than thermoplastic
RL00 - not looking glossy
RL01 - random (individual) skin pattern like spots, freckles, (pseudo-)veins
RL01 - no relevant quality loss within 10 years
RL02 - heating of skin and particular tissue by veins or skin layer
RL03 - sensing of touch
RL?? - sensing of pain
RL?? - sensing of pressure
RL?? - pressure marks if pressed
RL?? - sensing of needles going into tissue in every body part, self-healing
RL?? - enhanced resistance e.g. if being spanked every day
RL?? - no relevant quality loss in x years under heavy usage e.g. sex, dancing, spanking
RL?? - visible muscle movement under the skin, similar to human, especially upper legs
RL?? - (partial) copies of the bot have to look like the original, Reproducibility

- Comfort
- internal self cleaning by drinking water and cleaning fluid
- internal storage of cleaning fluid and also lubricant for sexual usage
- release of internal fluids in bathtub, shower or on toilet
- showering or bathing on her own if demanded by her owner
- internal self cleaning without immediate need of visiting the bathroom

- Hands and tactile sensing
- realistic looks and movement
- tactile sensing
- different forms of massages

- Mind
- internal computer, controllers, external computer(s) at home or cloud
- personal mind with personality and mods with backup function
- hive mind to share non-private data with other bots
- machine learning on external server at home, like dreaming
- free software as much as possible
- small programs with good APIs, Unix style

- Eyes and visual recognition
RL00 - moveable pupils
- nearfield face recognition
- looking at and following something
- good enough for reading
- recognising things more than some meters away
- maybe separated computer that processes input for security reasons
- advanced recognition system, face, voice, bodysize, other traits

- Interfaces
- wifi for external servers as part of the brain
- segregation of different systems for security reasons
- maybe plug for batteries in a backpack for outdoor activities
- external sensors in wearable objects like hairclips (via bluetooth?)

- Sexual usability
RG00 - 1 usable orifice
RG01 - 2 usable orifices
RG02 - 3 usable orifices
>>9555
- lots of different internal "muscles", massage rings, etc
- automatic release of lubricant
RG10 - being on top while having sex, will need more strength and coordination

- Hearing and natural language processing
- understanding as much as possible by only using the internal computer(s)
- storing longer conversations, transfer to external computer for learning
- additional noise sensors might be useful, for faster reactions?
- maybe separated computer that processes input for security reasons

- Other essential traits for realism
- natural behaviour considering body movement
- automatic human-like positioning
- knowledge about natural and erotic posing
- non-toxic saliva, in self-cleaning mode replaced by cleaning fluid

- Additional traits for enhanced realism
- sense of taste
- simulated breathing
- crying including (salty) tears
- talking by using air to form the voice
- advanced usage of hands from gaming to piano
- simulated eating and going to toilet on their own
- specific sweating incl. optional saltiness
- female pheromones
- weight and balance management by internal fluid
- dampening of internal sounds by using noise cancellation or other methods

- General traits or abilities
- recycling of expensive parts of the body, when upgraded
- easily removable batteries, controllers, and main computer

- Less important, unimportant or special traits and abilities
- sense of smell, maybe for security reasons or chores
- heat storage, maybe from sun, more likely while recharging, for later use
- superior abilities which are easy to achieve e.g. superhearing
- extremely long lasting batteries, maybe based on chemical liquid
- taking part in (outdoor) sports or playing with children
- watersports, swimming, surfing, maybe even diving
- alternative energy usage e.g. solar, food, salt
- rechargeable by induction (without cable)
- giving milk to a real baby while simulating a heartbeat
- producing female milk and/or pheromones by inhabiting GMOs
- makeup-like changes in color of some body parts
- ability to put on makeup on her own
- solar protected skin for outdoor activities
- friendly microbes, GMO, maybe with additional traits
- self-defence and runaway reflexes, lockdown of orifices and mind
- emergency call, reanimation procedure, opening of the door

- Unrealistic or most likely pointless requirements
- real human skin and other cyborg parts

Robowaifus, Animenoids, Anime-like:

- Body movement
- like the Human-likes but
- maybe having embedded rollerskating wheels

- Facial Expressions
- like the Human-likes but
- screen-like eyes for showing emotions
- screen-like mouth for showing emotions
- cheek-LEDs under the skin for showing emotions
>>9554 >>9555 >>9556 Great stuff Anon. Thanks for taking the time to dig this up and post it here. I wonder if we can borrow some general categorization rigor here? Roget's Thesaurus, or perhaps the Dewey Decimal System? I'll spend time thinking how we can integrate all these lists into the RDD >>3001 .
>>9568 >borrow some general categorization rigor here? I put it here so others can integrate it in their system of sorting things out.
>>9586 I see. Well, thank you for that Anon it's a most helpful listing. I've been toying around for a while now about how we might go about rigorously categorizing the literally hundreds of different topics that /robowaifu/ overall at least touches upon. For example, our Library thread is a bit of a mess ATM IMO, and one I'd like to see cleaned up effectively by our 5th birthday here. My guess is that one of these classic works might help us all out in that sense. Thanks again.
We also need to break down auxiliary related items like plastic, circuit board and metal production.
>>9894 Very good point Anon. I would also add that various manufacturing techniques should be broken down as well. For example small-scale factory production runs inside a garage lab using kits, vs. semi-automated manufacturing for a small-scale business. Manufacturing itself can be an art & science.
How do you organize your PDFs and other books or papers? Not sure if this is the right thread, but it seems to be about organizing stuff. I just thought repeatedly about wanting a method of being able to extract the titles from my PDFs for machine learning and other RW related topics, which I download all the time. I tried pdfx, which didn't work because it only extracts metadata which might not have been put into the file by the author. Basically, I didn't even get the title from the file I tested it with. Then I looked into the arxiv.py library for Python3, which I got from pip3 install arxiv. It's badly documented in regards to the internal help, but it has a helpful github page: https://github.com/lukasschwab/arxiv.py This is just some kind of feedparser. So I put it into a function, using it:

import arxiv

objects = ['entry_id', 'updated', 'published', 'title', 'authors', 'summary',
           'comment', 'journal_ref', 'doi', 'primary_category', 'categories',
           'links', '_raw']

def getarxiv(id, object):
    # accept either a bare arxiv id or a path/URL ending in .pdf
    if id.endswith('pdf'):
        id = '.'.join(id.split('/')[-1].split('.')[:-1])
    if id.find('.') == 4:  # new-style ids look like YYMM.NNNNN
        search = arxiv.Search(id_list=[id])
        paper = next(search.get())
        if object in objects:
            print(getattr(paper, object))
        else:
            print('Try: paper. + ', objects)

Which can be used like this:

getarxiv('2006.04768', "title")
> Linformer: Self-Attention with Linear Complexity

Also works with the file path instead of the id-based name. It can also get the summary and other stuff. I'm not exactly sure how I'm going to use that, but one use case would be having a text or html file with all the summaries for the papers I have, maybe with tags based on which folder they are in or what keywords are in the title. We'll see. It's certainly going to help to find or sort the papers, or to put up a posting or even a website with the summaries and links to the download of each. The other program, pdfx, also extracts links to papers which are referenced in the input document. So if one wants to batch download them just in case, it could be easier that way.
>>10317 Very good question Anon, and a nicely fleshed-out one as well. This is definitely a good topic for this thread, I'd say. But you might run over this one too for ideas and examples, even though it's not precisely the same alignment: (>>2300). There's also another thread that's a bit more aligned, but hasn't made it across from the migration, so not much content yet (>>269). Good luck Anon. Trying to 'catalog' the whole board itself has some similar difficulties, and our Library thread (>>7143) shows that it can be a rather messy process. I'm the OP of that one, so I can say that pretty unabashedly. :^) But you've touched on a very important topic to us all in your attempt, Anon. So Godspeed to your efforts, please figure it out! I know our catalog could use some help while we're at it. :^)
> The thing that did impress me was the organization behind it. I asked Joe about it. He sang to his microphone and we went on a galloping tour of their “Congressional Library.”
> Dad claims that library science is the foundation of all sciences just as math is the key-and that we will survive or founder, depending on how well the librarians do their jobs. Librarians didn’t look glamorous to me but maybe Dad had hit on a not very obvious truth.
> This “library” had hundreds, maybe thousands, of Vegans viewing pictures and listening to sound tracks, each with a silvery sphere in front of him. Joe said they were “telling the memory.” This was equivalent to typing a card for a library’s catalog, except that the result was more like a memory path in brain cells-nine-tenths of that building was an electronic brain.
https://metallicman.com/laoban4site/have-spacesuit-will-travel-full-text-by-robert-heinlein/
The key line here is
>and that we will survive or founder, depending on how well the librarians do their jobs.
This point has stuck with me ever since reading this wonderful book, and in many ways really inspired me to tackle our library thread, even though I didn't feel up to it. We are all attempting something here that has never been done before in history. The complexity involved I consider pretty staggering, personally. I wouldn't have it any other way heh -- otherwise, why even bother? :^) I feel that our ability to organize our learning-curve achievements will be vital to our success in the end.
>>10317 I used to just dump abstracts and keywords into a text file but recently I started using Zim which is a wiki-like graphical text editor. You can insert images, graph diagrams, code and equations and also link to PDF files and code. It makes it super easy to organize ideas and research and you can search through everything as well. Pages can also have sub-pages for further organization. https://zim-wiki.org/ The source code of pages is simple and you could easily generate pages for Arxiv PDFs you already have saved with the API, then fill them in with relevant links and notes later. If you want to insert code and equations, enable the Source View and Equation plugins. There are also Task and Journal plugins which are good for keeping notes on experiments and directing your progress.
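Since Zim pages are just text files, generating them can be scripted too. Here's a minimal sketch, assuming the usual Zim page header and a notebook that stores pages as .txt files (check against a page Zim itself created to be sure); write_zim_page() and its arguments are just my own naming, not anything from Zim:

import os
import re

ZIM_HEADER = "Content-Type: text/x-zim-wiki\nWiki-Format: zim 0.4\n\n"  # assumption: verify against your own notebook

def write_zim_page(notebook_dir, title, summary, url, tags=()):
    # Zim maps page names to file names, so strip characters it may choke on
    name = re.sub(r'[^A-Za-z0-9 _-]', '', title).strip().replace(' ', '_')
    folder = os.path.expanduser(notebook_dir)
    os.makedirs(folder, exist_ok=True)
    path = os.path.join(folder, name + '.txt')
    with open(path, 'w') as f:
        f.write(ZIM_HEADER)
        f.write('====== %s ======\n' % title)                # page heading
        f.write(url + '\n\n')
        f.write(summary + '\n\n')
        if tags:
            f.write(' '.join('@' + t for t in tags) + '\n')  # Zim-style @tags
    return path

# e.g. with a paper object from the arxiv snippet earlier in the thread:
# write_zim_page('~/Notebooks/Papers', paper.title, paper.summary, paper.entry_id, ('transformers',))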
>>10331
>zim
I already know that program. Thanks for the reminder. Now I'll look into it again, since it's still around. (Wasn't using it bc I switched computers and my old disc is encrypted and I don't remember the exact PW. That's why I forgot about the program. One thing I want in the future is a script or the OS making a textfile with all programs installed, so one can easily recreate the same OS.)
>>10335
>One thing I want in the future is a script or the OS making a textfile with all programs installed, so one can easily recreate the same OS.
My apologies that I can't remember it Anon, but a few years back when I was still on Linux Mint, there was an explicit tool that would run through your program setups and system config, and then record that out to a re-installation script. The explicit intent was to quickly and simply allow an Anon to nuke (or lose) a box, but be able to reinstall everything fresh from scratch with practically no muss or fuss. Again, sorry I don't remember it, but again, it was available in the Linux Mint repos. (Therefore, possibly in the upstream Ubuntu / Debian ones).
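In the meantime, for the package-list part specifically, a Debian-family system (including Raspbian) can already do that with nothing but dpkg/apt. A minimal sketch, assuming all you need back are the repo packages themselves (the filename is whatever you like):

# on the old box: dump the list of installed packages
dpkg --get-selections > my-packages.txt
# on the freshly-installed box: load the list back in and let apt install everything on it
sudo dpkg --set-selections < my-packages.txt
sudo apt-get dselect-upgrade

That only covers packages from the repos, not your dotfiles or anything built from source, so treat it as one piece of such a re-installation script rather than the whole thing.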
>>10331 Wow that sounds amazing Anon, thanks.
Open file (68.08 KB 1182x763 wall of information.PNG)
N00b with 0 practical experience with AI with a bit of an idea. I was gonna put this in the AI design thread, but seeing as it's more a structural question than a nitty-gritty AI question, thought it'd do here. Say you have a chatbot style AI developed. It can take in external information in text, and return information back to the user in text. Before the output text reaches the user, it's run through a script that checks for commands, and when it detects one, triggers an action that the robowaifu body carries out. These actions aren't manually completed by the AI, and instead are pre-scripted or carried out by a dedicated movement AI. Is it possible to train the chatbot AI to consistently understand how to send out commands accurately? How do you incorporate that sort of thing into training data? And, in another way, is it possible to take a robowaifu's senses and pipe them into a chatbot's interface via text in the same manner? Pic related is a better way of explaining it. Is this model feasible, or would an in/out system like this hamper training speed to a no longer viable amount? I know that there's obviously more steps in the chain to this (for one, an always-open microphone will confuse the AI into thinking you're always talking to it, so there has to be an "are you talking to me?" filter in the path), but given this rough draft, is such a model possible with the technology that the average anon has (barring RW@home that other anons have suggested)?
>>10357
I'm not knowledgeable enough ATP to answer your AI-specific questions, but the
>And, in another way, is it possible to take a robowaifu's senses and pipe them into a chatbot's interface via text in the same manner?
question I can pretty confidently answer with a 'yes', since it really involves little more than sending properly-written signaling data to the display.
>diagram
I really like your diagram Anon, one minor suggestion: I'd say you could combine the two blocks containing 'Typo Correction' into a single 'Typo Correction/Error Checking' block, that sits before the 'Text Analyzer' block.
>Is this model feasible, or would an in/out system like this hamper training speed to a no longer viable amount?
Yes, I think that's likely to be a reasonable approximation at this point lad. It will take many, many more additions (and revisions) to flesh it out fully in the end. But you're certainly on the right track I'd say.
>is such a model possible with the technology that the average anon has
Since a general definition of 'average anon' is pretty much an impossibility, I'd suggest a rough, reasonably adequate, target user definition as being: An Anon who has one or two SBCs and some rechargeable batteries, dedicated specifically to his robowaifu's exclusive use. If it takes anything more than this hardware-wise to work out the AI/chat part of a robowaifu's systems, then that would basically exclude the (much-higher numbers of) impoverished/low-income men around the world (>>10315, >>10319). I'd suggest that it be a fundamental goal here on /robowaifu/ to attempt the AI/Chat system be targeted specifically for the Raspberry Pi SBC. Not only would that be a good end-product goal to target, but it also has advantages for us as designers and developers as well. (>>4969)
>Once we're finished each of you will have your own little development exploration box you can literally carry around in your pocket. It will be self-contained, independent, and won't interfere with your other computing/vidya platforms. It will offer you a convenient way to begin controlling embedded hardware directly on the same machine that you write software for it on.
>>10357
>consistently understand how to send out commands accurately?
If you have the command and its parameters stored in some text then you should be able to send it to e.g. the servo controllers. However, if it's about moving around it would certainly have different parameters depending on the situation. Even more so if it's a high-level command which has many sub-commands and requires recognizing objects and planning motion; then it's way more difficult. What happens in your text analyzer, and from there to the action, will be very complex. You can have a command like lift-right-arm, but then, how much? Which angle for each joint? What if something is in the way? We have a thread for chatbots >>22 which became more and more one for general AI, also one for AI concepts, and one for GPT-2/3.
>take a robowaifu's senses and pipe them
This would be some kind of context.
> into a chatbot's interface via text
What does chatbot-style-AI mean? Some already existing system? You can do kind of everything with code. If the other parts of the system know what it means, then they can use it.
> hamper training speed
Your system doesn't look like some ML model. I also don't see how we could build any AI as one single model. It needs to be various pieces of software communicating with each other. Also, forget about the distinction between basic and complex actions. Your basic actions aren't basic.
>>10400 Not him Anon, but this is an insightful post. Many anons neglect the complexity & judgement involved in even a 'basic' movement for a robowaifu (or for us). Picking up a dish and putting it into a dish sink, for instance, is actually quite a complex, interconnected set of tasks that all have to be planned out and sequenced in proper order, and then carried out with precision and finesse. Our own visual/reasoning/neuro/musculo/skeletal systems have been designed to do these kinds of movements, and from our births have been fine-tuned and perfected over years of time. But now we ourselves -- us designers & engineers -- will have to figure out exactly how to work each of these steps out in detail ahead of time.
>us designers & engineers -- will have to figure out exactly how to work each of these steps out in detail ahead of time.
Oh, I hope not. We need to get to a point where a robowaifu can learn to do things on her own. I only want to get close with programming, not create all of it. There's something that is called pose estimation; we will hopefully be able to use something like this to make her learn from videos. https://youtu.be/F84jaIR5Uxc
>>10422 Well, I suppose it could be narrowed down from we to someone, Anon. But the simple fact is machines have no 'instincts'. As to 'learn doing things on her own', then again, someone will have to devise that ability. For now, animatronics-like approaches (specifying every little detail) are our surest approach to functionality. This will get progressively easier for us all, as we have lots of 'baseline' robowaifus out there of this type, and lots of smart men begin thinking hard about it all. While I'm no expert in the field, I seriously doubt that any actual, working (non solely-academic) AI/ML engineers out there would claim there's anything even remotely like an AGI (or w/e they're calling it these days) in existence. And without careful, dedicated & meticulous attention to details, nothing happens in this world filled with entropy. It will be up to us Anons or other men to solve this systems problem -- it certainly won't solve itself! :^) This is the task ahead of us, plain and simple.
>pose estimation
Yep, that's a good feature to pursue for us all. It's a great way to simplify the complexity of the kinematics, situational awareness, and motion planning problems. There are likely some other benefits as well.
>>10425 What I meant was that we won't need to design every move in every detail, only some estimation. The basic idea is that the system would learn by trying and observing the result. For example grabbing something: closing fingers would be programmed; how many and how much (for each) are parameters. These parameters can be changed, and therefore have some effect. Maybe we would first try it in a simulation, then with the real arm and hand. Object detection exists, so she could try until she lifts it off and holds it. The object getting closer means confirmation. Or sensors in the hand confirm some object still being there after lifting the arm. Ideally we'll have several ways to confirm some change. If we do it like that, no observation by the owner would be necessary while the system learns. I would also prefer to grab data from me doing things with some glove which measures hand movements and such, instead of programming it. However, the basic movements shouldn't be such a problem to write down anyways. After that it might work with pose estimation and the way I described above. Optimizing something like grabbing could then be defined by having more contact with the object at all times, but not squeezing it, and holding it in a correct way (e.g. plates with food). Or not too much contact, but sufficient, maybe for reasons of hygiene. How to handle each object would be determined by the object detection and the knowledge (probably via some graph database) about the object.
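To make the 'parameters plus confirmation' idea a bit more concrete, here's a toy sketch of that grabbing loop; read_pressure() and set_finger_angle() are placeholders for whatever the real sensor bus and servo controller actually provide, and the thresholds are made up:

import random

FINGERS = ['thumb', 'index', 'middle', 'ring', 'little']
MAX_ANGLE = 90          # degrees of closure per finger
STEP = 2                # degrees added per control tick
TARGET_PRESSURE = 0.6   # normalised contact value that counts as "holding"

def read_pressure(finger):
    # placeholder for a real sensor read; random here so the demo terminates
    return random.random()

def set_finger_angle(finger, angle):
    # placeholder for the real servo command
    pass

def grasp():
    angles = {f: 0 for f in FINGERS}
    done = set()
    while len(done) < len(FINGERS):
        for f in FINGERS:
            if f in done:
                continue
            if read_pressure(f) >= TARGET_PRESSURE:
                done.add(f)                 # enough contact: stop closing this finger
            elif angles[f] < MAX_ANGLE:
                angles[f] += STEP
                set_finger_angle(f, angles[f])
            else:
                done.add(f)                 # joint limit reached without contact
    return angles

print(grasp())

The same loop could run in a simulation first and on the real hand later; only the two placeholder functions would change.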
First post on the board. I hope this is relevant for this thread but I had what I think is a very very important idea while reading this board about how things must be as abstracted as possible in the AI herself. For example, the "personality" component should not have to dictate the exact electrical parameters to every motor, but she should just have to will her arm to move as we do. Perhaps this would involve something like a "world model" like I found on this board with things like this https://worldmodels.github.io/ . This idea that the mind would only deliver abstract ideas to another "component", perhaps even a more basic machine learning model, kind of suggests the idea of multiple computers within the same AI system. Again I hope this is relevant enough as I couldn't think of anywhere else to put it, haven't seen anyone mention the concept except people talking about world models, and wanted to share the idea because it could make breaking down of the problem easier. I'm eager to hear your opinions.
Also, I'm considering making some diagrams. I already started one about all existing robowaifu projects. This is currently kind of stuck, because I would need to collect more data on each one. I will care about that one later. One new idea is to make a diagram for how to get started with robowaifu development. I'll post the first ideas here, maybe someone has suggestions how to improve it. I want to model it in text to some extent before I start putting it onto a PNG diagram. Of course, the diagram can't go too deep into each topic, this would be something for other diagrams.
| Python / basic math -- statistics / linear algebra -- ML -- DL -- NLP -- NLTK -- ML
| graph databases -- RDF primer -- programming
| electronic basics -- Arduino + embedded programming -- motors / servos / sensors / energy systems
| programming -- CPP/Python/Lisp -- <List of concepts>
| 3D design -- CAD / sculpting -- Blender / Solvespace -- 3D printing
| molding -- clay modelling -- silicone / plastic resins
| plastics -- 3D printing / thermoplastic modeling / resin molding
| conversational AI -- AIML (limitations) -- programming -- ML / NLP / graph databases / text generators (GPT) -- speech recognition / speech synthesis
| face design - 3D design / molding / generative networks
| motion -- electronics / programming + simulation -- actuators -- object detection / object categorization / situational awareness / navigation / sensors -- walking
| advanced AI -- psychology / philosophy / cognition
| skeleton -- plastics / 3D modelling
| vision -- electronics -- object detection
| skin -- silicone / textiles / sensors
>>10357 (me)
>>10400
>What does chatbot-style-AI mean? Some already existing system?
Basically, Cleverbot, Evie, Replika, anything that has a user input text, then responds with an AI-derived response to mimic a back-and-forth conversation. I had the thought of a model like this to allow hot-swappable AIs, just so if a newer, better-coded AI comes to light, as long as it has the same basic text-in, text-out system, it can be swapped in and trained to utilize the rest of its body.
>>10427
>This idea that the mind would only deliver abstract ideas to another "component", perhaps even a more basic machine learning model
This is what I was trying to get at. Instead of forcing the chatbot AI (which is designed first and foremost to speak like a human, not move like a human) to learn the nitty gritty of each action down at the metal (move ABC servo XYZ degrees, move DEF servo XYZ degrees, etc) it calls out an abstract command that other code can pick up on and carry out in place of the AI directly. The chatbot isn't moving, the action handler is, and all the chatbot has to do is invoke a command, and the action handler can then carry it out. Granted, this leaves a whole lot open to interpretation from the action handler, but there can be other information that text analysis can give that can influence how the action handler carries out its actions aside from just the command invocation (like those ML scripts that can predict the emotion behind words as a set of confidence values, which can be plugged in and used to further give emotion to movement by understanding the AI's mood).
>>10400
>if it's a high-level command which has many sub-commands and requires recognizing objects and planning motion, it's way more difficult. What happens in your text analyzer, and from there to the action, will be very complex.
This feels like a more achievable goal than native control, at least to me. For lack of a better way of explaining it, Boston Dynamics' Spot can scan its environment, create a model, and determine the best way to move around without falling over or bumping into things, and the end user can code movement in without having to manually tell each servo how to move and where to step -- it's all abstracted away, and, without code, is simple enough to use with a gamepad-style controller. Granted, this is a bit of an unfair comparison since Spot is an engineering masterpiece with over 30yrs of development (and is indeed, very complex), but considering Spot-like bots exist, and Replika-like AI exist, but robowaifus don't yet, I think this model is a good way of cross-breeding these two technologies together if direct-control isn't viable. At least to me, coding movement control in this way seems way easier than trying to wrap my head around AI and ML trying to learn to walk in a virtual environment then trying to translate virtual movement to IRL movement.
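To make the action-handler idea concrete, here's a rough sketch of the "script that checks for commands" step; the [[COMMAND:argument]] token format and the handler names are arbitrary placeholders I made up, not an existing convention, and real handlers would hand off to the movement AI instead of printing:

import re

def wave(arg=''):
    # placeholder: would trigger a pre-scripted waving motion
    print('action: wave', arg)

def fetch(arg=''):
    # placeholder: would hand the request to the movement AI / action handler
    print('action: fetch', arg)

HANDLERS = {'WAVE': wave, 'FETCH': fetch}

# commands are embedded in the chatbot's reply as e.g. [[WAVE]] or [[FETCH:teacup]]
COMMAND_RE = re.compile(r'\[\[([A-Z_]+)(?::([^\]]*))?\]\]')

def filter_output(text):
    """Dispatch any embedded commands, then return the reply with the tokens stripped out."""
    for name, arg in COMMAND_RE.findall(text):
        handler = HANDLERS.get(name)
        if handler:
            handler(arg)
    return COMMAND_RE.sub('', text).strip()

print(filter_output("Of course! [[FETCH:teacup]] One teacup coming right up."))

The nice part of keeping it this dumb is that the chatbot model itself only ever sees and emits text; as long as the training data includes those tokens in the right places, the rest of the body stays swappable.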
FYI, the "Also" in >>10428 doesn't refer to >>10427. It's just a new sub topic. >>10427 Welcome on board. I think it kind of fits in here, it certainly doesn't harm. Though, we also have the meta thread for general topic of robowaifus and the chatbot thread >>22 became more about cognition in general, while this here is for making diagrams, plans and such, at least in my understanding. In reality this is all kind of fuzzy and mixed together. So it's not such a big deal anyways. We started talking general development here recently, so we can figure out how to make plans for it. However, please try to avoid unnecessary empty lines in your posts. This is sometimes called 'Reddit spacing'. You're right about having abstractions. "Cognition" should call some specific movement as some high level function. We need to have all kinds of models interacting with each other to figure out what and how to do things. I'll take a look at your suggestion soon. >>10429 > chatbot ... learn the nitty gritty of each action down at the metal Nooo, of course not. But thinking, imagination and acting are interconnected. Think about the inner voice. So something very close to the conversational AI (/chatbot) needs to be able to call actions on a high level (simple commands) with some parameters like context. The part controlling the movements needs to look for other contexts like risks and obstructions, or rather have such information ahead of receiving any movement command. The system should always know which movements could be done, before they even happen, like defining safe zones every 500 milliseconds or so. >Full, direct-control of the waifubody by the AI would be cool, but.. The whole system is the robowaifu AI, not the chatbot (/conversational AI). I'm quite sure, humans don't plan movements with their center for speech to every detail. The parts here should be seen like specialized parts of the human brain. >coding movement control in this way seems way easier than trying to wrap my head around AI and ML trying to learn to walk in a virtual environment then trying to translate virtual movement to IRL movement I don't know myself how we'll figure it out eventually, yet. But I think you're making the wrong distinctions: One is about training in reality the other in simulations. Eventually we want both, the later for her to learn in her dreams, based on experiences in real life or thing she saw on TV. The other is about coding and ML. I think the first step is coding some basic movements, then using ML in reality or simulation to train all the little deviations and considerations of sensory data, then we have a model that takes commands from another part of the system and executes them, which regard to the situation and sensory data. Then she should analyze her experiences in simulations, while she has nothing else to do, or simply on a external server at home. Maybe that's to simple, and it will require even more parts, but that's the basic idea so far.
Open file (245.47 KB 1915x1443 getting_started.png)
>>10428
So, I was actually working on this. Here is the result, which is probably not the last version. I didn't post it in the prototype thread since we have this thread here on the topic of organization. Maybe the arrows in my diagram should go the other way, idk. I made it starting from the viewpoint of a beginner, who then finds paths to move along. It could be better to think of an endpoint and build it backwards from there. I'm not sure. For now I'll just publish it, before it becomes one more project I never finish and publish because something could still be improved. Also, I viewed this on my computer as an SVG file with a black background, but since I can't upload that format here, I upload it as a PNG file. Newer versions of PlantUML can use external resources to change the look; maybe mine can do this as well, but I don't know how yet. I want to use this program for getting a better overview over the whole topic we are covering, of course not by putting everything into one diagram. It's quite complex already, just by covering the surface.
PlantUML code
@startuml
(*) --> "Python"
(*) --> "basic math"
"Python" --> "basic algebra"
"basic math" --> "statistics"
"basic math" --> "basic algebra"
"Python" ...> "concept: natural language processing"
"concept: natural language processing" -right-> "NLTK"
"NLTK" -right-> "machine learning"
"linear algebra" --> "machine learning"
"statistics" --> "machine learning"
"machine learning" --> "deep learning"
"machine learning" ..> "concept: graph databases"
"deep learning" ..> "concept: graph databases"
(*) --> "basic algebra"
"basic algebra" --> "statistics"
"basic algebra" --> "linear algebra"
(*) ..> "concept: graph databases"
"concept: graph databases" --> "RDF primer"
"RDF primer" ...> "programming"
"programming" --> "SparQL/Neo4J"
"programming" ...> "concept: ontologies"
"concept: ontologies" ...> "concept: knowledge graphs"
"concept: ontologies" --> "SparQL/Neo4J"
"SparQL/Neo4J" ...> "concept: ontologies"
"concept: natural language processing" ...> "concept: graph databases"
"concept: knowledge graphs" ...> "concept: natural language processing"
(*) --> "electronics basics"
"electronics basics" --> "Arduino (/embedded programming)"
"Arduino (/embedded programming)" --> "actuators"
sensors --> "Arduino (/embedded programming)"
"energy systems" --> "Arduino (/embedded programming)"
"electronics basics" --> "sensors"
"electronics basics" --> "energy systems"
"energy systems" ...> walking
"programming" --> "CPP/Python/Lisp/Swift"
"CPP/Python/Lisp/Swift" --> "<List of concepts>"
(*) ..> "3D design"
"3D design" --> CAD
"3D design" --> sculpting
"3D design" --> "clay modelling"
"3D design" --> "3D model extraction"
sculpting --> Blender
sculpting --> Fusion3D
CAD --> "Solvespace"
CAD --> "Blender"
CAD --> "Fusion3D"
"Blender" ...> "3D printing"
"Solvespace" ...> "3D printing"
"Fusion3D" ...> "3D printing"
"3D printing" ...> molding
"clay modelling" ...> molding
molding --> silicone
molding --> "plastic resins"
"clay modelling" --> silicone
"clay modelling" --> "plastic resins"
(*) ..> plastics
plastics --> "3D printing"
plastics --> "thermoplastic modeling"
plastics --> "plastic resins"
(*) ..> "conversational AI"
"conversational AI" ...> "response generation"
"response generation" --> "AIML (scripted responses)"
"conversational AI" ...> "concept: natural language processing"
"conversational AI" ...> "text generators"
"conversational AI" ...> "speech recognition"
"conversational AI" ...> "speech synthesis"
"text generators" --> "AIML (scripted responses)"
"machine learning" --> "text generators"
"text generators" --> "concept: natural language processing"
programming --> "AIML (scripted responses)"
"AIML (scripted responses)" ...> "concept: graph databases"
(*) ..> "face design"
"face design" --> "3D design"
"face design" --> "generative networks"
"deep learning" --> "generative networks"
(*) ..> motion
motion ...> "electronics basics"
motion ...> "programming"
simulation --> motion
programming ...> simulation
motion --> "actuators"
"actuators" --> "dc motors"
"actuators" --> "pneumatics/hydraulics"
"actuators" --> "dielectric elastomers"
"dc motors" ..> walking
"pneumatics/hydraulics" ..> walking
"dielectric elastomers" ..> walking
motion ...> walking
(*) ..> "computer vision"
"computer vision" --> "object detection"
"computer vision" ...> "electronics basics"
"object detection" --> "object categorization"
"machine learning" ...> "object detection"
"concept: ontologies" ...> "object categorization"
"computer vision" ...> "situational awareness"
"situational awareness" --> navigation
"computer vision" ...> navigation
simulation ...> navigation
navigation ...> walking
(*) ..> skin
skin --> silicone
skin --> textiles
silicone --> textiles
silicone --> sensors
textiles --> sensors
(*) ..> skeleton
skeleton --> plastics
skeleton --> "3D design"
skeleton --> metals
"concept: knowledge graphs" ...> "advanced AI"
simulation ...> "advanced AI"
"advanced AI" --> "psychology/philosophy/cognition"
@enduml
>>10670
>It's quite complex already, just by covering the surface.
That it is, and no fault of yours Anon. That's rather a good first attempt at assembling a mindmap of sorts for robowaifu technicians, and I'm glad you didn't just shelve it b/c it's not """perfect""" yet. On that topic of perfectionism: it's been the downfall of many would-be robowaifuists who have >tableflip.exe'd the entire deal simply b/c some little thing or other didn't work out just the way they envisioned.
As your initial diagram amply brings to light, there are tonnes of subjects at hand when devising robowaifus. A mature outlook would suggest that if one thing isn't working out just right ATM, then switch gears and move onto another track for now. My own experience tells me that very often, while you're working on some different topic or other, a flash of insight occurs relating back to an earlier roadblock. Many's the time I've jumped back into something from earlier after taking days (or even weeks!) to 'chew things over' in my mind. More often than not, I actually solve the issue successfully and can tick it off the list. I've even solved problems entirely in a dream, jumped up to work on them when I woke up, and it worked!
'Forward momentum' (in the euphemistic sense) is a very, very important thing to maintain when you're tackling something expansive like this. "Every little helps" as King Aragorn would say, and before long you'll look back and see how far you've come if you just keep plodding away at it.
>tl;dr
The main point is simply to keep moving forward.
I also want to look into Mermaid: https://mermaid-js.github.io/mermaid/#/
This one translates something close to Markdown into graphs of different kinds. It works within websites (nodejs) or on the command line. Here is the command line version; the first line downloads 50+ MB of modules and builds it, the second one starts the help: https://github.com/mermaid-js/mermaid-cli (I used yarnpkg for that, which I had to install first.)
yarn add @mermaid-js/mermaid-cli
./node_modules/.bin/mmdc -h
At first I got stuck with the installation and wanted to write that I'm done for today. It needs some other stuff which takes time, and it seems to need a program named Puppeteer, which needs to install a whole instance of Chrom(ium?)?!? Whatever. Nodejs has a bad reputation, I wonder why. But then it finished after a fresh start. Not gonna test it today, though.
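For reference, a minimal sketch of what feeding Mermaid a definition might look like once mmdc is installed. The node names are just placeholders lifted from the learning-path diagram above, and the filenames are assumptions:

learning.mmd:
graph TD
  basics["electronics basics"] --> arduino["Arduino (/embedded programming)"]
  arduino --> actuators
  actuators --> walking

./node_modules/.bin/mmdc -i learning.mmd -o learning.svg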
Open file (584.89 KB 1900x2236 StateOfRobowaifu.png)
>>10716
The command was actually yarnpkg, not yarn, btw. But that might depend on the distro or OS. Also, since I couldn't sleep anyway, I could at least upload these two. One is the file from >>10670 with some minor modifications and new colors, which I added in Dia. The other is a first draft of an overview of all open-source robowaifus with more details on their current skills. It's meant to be posted on imageboards or social media. This one is completely made in Dia, so there's no code. I might try out Mermaid first before working on that in Dia (if ever).
Open file (7.89 KB 287x329 simple-er.png)
Open file (26.31 KB 686x413 class.png)
Open file (6.07 KB 158x247 flow.png)
>>10716
I wonder what I should use to model certain things:
- notes on how the human mind works
- options to use in building some part, e.g. different camera options for building a vision system, but also choices like 'eyes with cameras' vs. 'cameras elsewhere', or '3D camera' vs. 'two webcams'
I could choose a flowchart, entity-relationship, or class diagram (a rough sketch of the second case follows below). There are also others for other use cases: git, gantt, sequence and user journey diagrams.
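Just as an illustration of one of the Mermaid diagram types, a hedged sketch of the vision-system options as a flowchart; the option names are only the ones mentioned above, nothing is decided:

graph TD
  vision["vision system"] --> placement["camera placement"]
  vision --> sensortype["camera type"]
  placement --> eyes["eyes with cameras"]
  placement --> elsewhere["cameras elsewhere"]
  sensortype --> depth["3D / depth camera"]
  sensortype --> stereo["two webcams (stereo)"]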
>>10776
Some form of UML is obviously going to be a common choice Anon, but IMHO ORM is much superior, both in expressiveness and in not locking you into a fixed-count relationship schema. (>>2303, >>2307, ...)
BTW, nice chart about robowaifu progress Anon >>10719. I hope you flesh it out more fully at some point.
>>10787
I think class diagrams might work best. I would look into ORM if it were supported by Mermaid. >>10719 might get fleshed out more at some point, but I would need to keep track of all the developments. If it were code, we could collaborate on it, but the code-based approaches don't seem to support pictures, and Dia doesn't support code import.
Open file (343.07 KB 426x590 Kosaka-san.png)
Open file (87.10 KB 680x1024 Medio Kosaka-san.jpg)
>>10719
Nice write-up you got there on Sophie anon :D But don't forget the queen of the robowaifus: Kokona Kosaka! She's made by the Japanese company Speecys Ltd. and runs on a Linux system, apparently something called 'MOFI-OS ver3.0'. But I'm not sure if that's the name of the actual OS or just a reference to a MoFi network (WiFi and Ethernet LAN with mobile phone and USB connectivity). Kosaka-san is 155cm tall and can sing. She can also pose or dance by using .VPD (Vocaloid Pose Data) files from Miku Miku Dance. Basically, the Japanese have already done it. A perfect hard-shell robowaifu. (No offence to Sophie, but I am just one dev.) Kosaka-san just needs better A.I. now!
>>10796 >A perfect hard-shell robowaifu Not quite perfect Anon. Not to denigrate Kokona Kosaka or her masters in the slightest, but she is still affixed to her base unit, and is basically heavy af. It's highly impressive as an achievement so far, and gives us all here something to strive for. But our ultimate goal should be reasonably inexpensive, power-efficient, mobile, autonomous gynoid companions. Just like in my Chinese Cartoon Documentaries. :^) All that aside, she is a marvelous robowaifu for sure.
>>10796 Such a kawaii outfit and pose!
>>10798 >Not quite perfect Anon. LOL true, sorry. I just go all Lord Katsumoto from 'The Last Samurai' when I see Kosaka-san singing and dancing.
>>10801 Kek, fair enough. It was a great moment!
>>10796 Thanks, and yes I'm going to put her into the next version.
>related crosspost >>1997
