/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Reports of my death have been greatly exaggerated.

Still trying to get done with some IRL work, but should be able to update some stuff soon.

#WEALWAYSWIN


Bipedal Robot Locomotion General Robowaifu Technician 09/15/2019 (Sun) 05:57:42 No.237 [Reply] [Last]
We need to talk about bipedal locomotion. It's a complicated topic, but one that has to be solved if we are ever to have satisfyingly believable robowaifus. There has surely already been a lot of research done on this topic, and we need to start digging and find the info that's out there. There are some projects that have at least partial roboleg solutions working, but none that I know of that look very realistic yet. We likely won't come up with some master-stroke of genius and solve everyone's problems here on /robowaifu/, but we should at least take a whack at it; who knows? We certainly can't accomplish anything if we don't try.

I personally believe we should be keeping the weight out of the extremities – including the legs – while other anons think that we should add weight to the feet for balance. What are your ideas, Anon? How do we control the gait? How do we adjust for different conditions? What if our robowaifu is carrying things? What about the legs during sex? Should we focus on the maths behind MIP (Mobile Inverted Pendulum), or is there a different approach that would be more straightforward? A mixture? Maybe we can even do weird stuff like the reverse-knee legs that so many animals have. Robofaun waifu, anyone? What about having something like heelys or bigger wheels in the feet as well?
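To make the MIP idea concrete, here's a minimal C++17 sketch of a pendulum-balance control loop. Every physical constant and PD gain here is a placeholder guess of mine, not a value from any real build; it just shows the shape of the maths.

// Minimal inverted-pendulum balance sketch (all values hypothetical).
// theta = lean angle from vertical (rad), omega = angular velocity (rad/s).
#include <cmath>
#include <cstdio>

int main() {
    const double g = 9.81;              // gravity, m/s^2
    const double l = 0.8;               // pivot-to-CoM distance, m (assumed)
    const double m = 12.0;              // mass, kg (assumed)
    const double I = m * l * l;         // point-mass moment of inertia about pivot
    const double Kp = 400.0, Kd = 80.0; // PD gains (untuned placeholders)
    const double dt = 0.001;            // 1 kHz control loop

    double theta = 0.15, omega = 0.0;   // start leaning ~8.6 degrees
    for (int step = 0; step < 3000; ++step) {
        double u = -Kp * theta - Kd * omega;                   // corrective torque
        double alpha = (m * g * l * std::sin(theta) + u) / I;  // angular accel
        omega += alpha * dt;                                   // Euler integration
        theta += omega * dt;
        if (step % 500 == 0)
            std::printf("t=%.2fs theta=%+.4f rad\n", step * dt, theta);
    }
    return 0;
}

Gravity's torque (m*g*l*sin(theta)) tries to tip her over; as long as Kp exceeds the gravity stiffness m*g*l, the controller wins and theta decays back to zero.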

I'm pretty sure if we just put our heads together and don't stop trying, we'll eventually arrive at at least one good general solution to the problem of creating bipedal robot legs.

>tl;dr
ITT post good robowaifu legs

>tech diagrams sauce
www.youtube.com/watch?v=pgaEE27nsQw
www.goatstream.com/research/papers/SA2013/SA2013.pdf
80 posts and 38 images omitted.
>>10183 It would make sense to do this in a physics simulation. The simulation wouldn't only be for training them, but for her to keep one running all the time for awareness. Of course not simulating everything all the time, only when necessary, and with simplified objects.
>>10186 Yes, that might be a good approach Anon, and for both of the enumerated needs too.
DrGuero made a new short video https://youtu.be/wxH3vQOz3JA, where he mentioned that his code is open-source now. His website http://ai2001.ifdef.jp is a bit slow right now, maybe too much load after releasing the video. Also it's in Japanese, so some translation software will be necessary.
>>10343 Thanks Anon! Here's the code page, I'm DLing now. http://ai2001.ifdef.jp/uvc/code.html >update OK, I've had a quick look at the code. It's old-style 'C with classes' (c.1985 C++) code. It seems to have only one major external dependency, Open Dynamics Engine. https://ode.org/ It's also W*ndows software. I'll try to sort out getting it to build on a modern C++ compiler on Linux sometime shortly, probably this coming week. Thanks Anon, DrGuero's work may well help our Bipedal Locomotion progress along.
>>10343 >>10344 update: OK, I've assembled the 4 code files into a workable project. I haven't set about trying to build it yet b/c dependencies. https://files.catbox.moe/wn95n1.7z However, if anyone would care to work on translating the embedded comments in the codefiles, then that would be a big help to me later on getting the code to run for us. TIA.
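If anyone else wants to sort out the ODE dependency ahead of time, a tiny smoke test like this (my own sketch, not part of DrGuero's code) should confirm the library builds and links. On most Linux distros it's packaged as libode-dev and links with -lode: g++ -std=c++17 ode_test.cpp -lode

// ODE link/smoke test: drop a 1 kg sphere and check it falls as expected.
#include <ode/ode.h>
#include <cstdio>

int main() {
    dInitODE();
    dWorldID world = dWorldCreate();
    dWorldSetGravity(world, 0, 0, -9.81);

    dBodyID body = dBodyCreate(world);        // one free-falling body
    dBodySetPosition(body, 0, 0, 1.0);
    dMass mass;
    dMassSetSphereTotal(&mass, 1.0, 0.05);    // 1 kg total, 5 cm radius
    dBodySetMass(body, &mass);

    for (int i = 0; i < 100; ++i)
        dWorldStep(world, 0.01);              // simulate 1 second

    const dReal* pos = dBodyGetPosition(body);
    std::printf("z after 1s of free fall: %f (expect ~ -3.9)\n", pos[2]);

    dWorldDestroy(world);
    dCloseODE();
    return 0;
}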

Open file (2.21 MB 1825x1229 chobit.png)
Robowaifu@home: Together We Are Powerful Robowaifu Technician 03/14/2021 (Sun) 09:30:29 No.8958 [Reply]
The biggest hurdle to making quick progress in AI is the lack of compute to train our own original models, yet there are millions of gamers with GPUs sitting around barely getting used, potentially an order of magnitude more compute than Google and Amazon combined. I've figured out a way we can connect hundreds of computers together to train AI models: gradient accumulation. How it works is by doing several training steps, accumulating the gradients from each step, then dividing by the number of accumulation steps taken before the optimizer step. If you have a batch size of 4 and do 256 training steps before an optimizer step, it's like training with a batch size of 1024. The larger the batch size and gradient accumulation steps are, the faster the model converges and the higher final accuracy it achieves. It's the most effective way to use a limited computing budget: https://www.youtube.com/watch?v=YX8LLYdQ-cA

These training steps don't need to be calculated by a single computer but can be distributed across a network. A decent amount of bandwidth will be required to send the gradients each optimizer step, plus the training data. Deep gradient compression achieves a gradient compression ratio from 270x to 600x without losing accuracy, but it's still going to use about 0.5 MB download and upload per optimizer step to train something like GPT2-medium, or about 4-6 mbps on a Tesla T4. However, we can reduce this bandwidth by doing several training steps before contributing gradients to the server: taking 25 would reduce it to about 0.2 mbps. Both slow and fast computers can contribute so long as they have the memory to hold the model. A slower computer might only send one training step whereas a fast one might contribute ten to the accumulated gradient. Some research needs to be done on whether a variable accumulation step size impacts training, but it could be adjusted as people join and leave the network.

All that's needed to do this is a VPS. Contributors wanting anonymity can use proxies or Tor, but project owners will need to use VPNs with sufficient bandwidth and dedicated IPs if they wish for that much anonymity. The VPS doesn't need an expensive GPU rental either. The fastest computer in the group could be chosen to calculate the optimizer steps. The server would just need to collect the gradients, decompress them, add them together, compress again and send the accumulated gradient to the computer calculating the optimizer step. Or, if the optimizing computer has sufficient bandwidth, it could download all the compressed gradients from the server and calculate the accumulated gradient itself. My internet has 200 mbps download, so it could potentially handle up to 1000 computers by keeping the bandwidth to 0.2 mbps each.

Attacks on the network could be mitigated by analyzing the gradients, discarding nonsensical ones and banning clients that send junk, or possibly by using PGP keys to create a pseudo-anonymous web of trust. Libraries for distributed training implementing DGC already exist, although not as advanced as I'm envisioning yet: https://github.com/synxlin/deep-gradient-compression

I think this will also be a good way to get more people involved. Most people don't know enough about AI or robotics to help, but if they can contribute their GPU to someone's robowaifu AI they like and watch her improve each day, they will feel good about it and get more involved.
At scale, though, some care will need to be taken that people aren't agreeing to run dangerous code on their computers, whether through a library that constructs the models from instructions or some other mechanism. Where the gradients are calculated doesn't matter: they could come from all kinds of hardware, platforms and software such as PyTorch, Tensorflow or mlpack.
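To make the accumulation arithmetic concrete, here's a framework-free toy sketch in C++ (gradient numbers are made up). Summing N micro-batch gradients and dividing by N before a single optimizer step is numerically the same as one large-batch step, which is the whole trick:

// Gradient accumulation sketch: N micro-batch gradients averaged into one
// optimizer step, equivalent to a single large-batch gradient step.
#include <cstdio>
#include <vector>

int main() {
    double w = 0.0;                 // toy single-parameter "model"
    const double lr = 0.1;          // learning rate (placeholder)
    const int accum_steps = 4;      // e.g. batch 4 x 4 steps ~ batch 16

    // Pretend gradients computed from 4 micro-batches (made-up values).
    std::vector<double> micro_grads = {0.8, 1.2, 0.9, 1.1};

    double grad_sum = 0.0;
    for (int i = 0; i < accum_steps; ++i)
        grad_sum += micro_grads[i];            // accumulate, no weight update yet

    double grad_avg = grad_sum / accum_steps;  // divide by accumulation steps
    w -= lr * grad_avg;                        // single optimizer step

    std::printf("averaged grad = %f, w after step = %f\n", grad_avg, w);
    return 0;
}

In the distributed version, each contributor computes its own grad_sum locally and only the sums cross the network, which is why more local steps means less bandwidth.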
23 posts and 4 images omitted.
>>8990
>Starlink
I don't really trust Elon with these things; look at Tesla's services. He definitely isn't /ourguy/ and will provide it at the cost of "freedom".
>Once basic chat is solved people are going to expand their virtual waifus to perform other functions such as playing video games
This has a lot of potential but it's prob not a good idea, since there doesn't exist just one game and it would become a Herculean task to try to appease everyone.
>composing music, drawing
Again, it's very vague and doesn't necessarily help the project in and of itself, nor necessarily appease our interests.
>debating
We already have many bots that can do that, and we know for a fact it's not that good of an idea.
>summarizing research papers
This is actually very realistic. A paper has already been written on this, and an AI made that can do it, so it's very easy to implement (we just need codemonkeys for that).
>searching the web
Unless it can do it in a non-pozzed way, it's practically useless.
>Alternatively, someone could create a marketplace where people can pay crypto for compute, but I'm not familiar with how to do that.
I don't care about that too much; it's still not worth it to enough people.
My suggestion would be to first make an AI that, just like GPT-3, can code programs that we tell it to. Once we automate the coding part, the rest will become much easier and we won't have to worry too much about it.
>>9086
>My suggestion would be to first make an AI that, just like GPT-3, can code programs that we tell it to.
I'm quite skeptical of the fundamentals of that claim. I'm familiar with the article that made the statement, but he basically pinned the entire notion on a single addition operation. It just returned an exemplar from its working corpus that performed that exact operation already. I would argue the lexicographic problem of the AI digesting the command "Waifu, add 3 plus 4 for me" is much more difficult. We'll probably get there in the future with software-programming waifus, but it certainly isn't here just now.
>>9000 https://www.mlcathome.org/mlcathome/ I've managed to find a new project that's just popped up using BOINC. It seems to be something for "Un-black-box-ifying" a lot of traditional machine learning algorithms.
>>9105 Thanks! Just in case anyone, like me, can't get to it via Tor: https://web.archive.org/web/20201225185201/https://www.mlcathome.org/mlcathome/
Seems like this could be somewhat related to your goals OP. https://github.com/tensorflow/mesh Please forgive my naivete if this isn't related.

/robowaifu/meta-3: Spring Blossom Tree Chobitsu Board owner 02/11/2021 (Thu) 12:06:37 No.8492 [Reply] [Last]
/meta & QTDDTOT
Note: New version of /robowaifu/ JSON archives available 210504 https://files.catbox.moe/zbgor1.7z
If you use Waifusearch, just extract this into your 'all_jsons' directory for the program, then quit (q) and restart.
-The TITS thread #1 is now unlocked and open for business (>>9709).
-previous /meta: >>38 >>3108
-Library thread (good for locating topics/terms): >>7143
>note: there's also a searching tool coded right here for /robowaifu/ that provides crosslinks straight to posts on the board. it's named waifusearch, and the link to the software is provided inside the Library thread's OP.
-Latest version of BUMP v0.2e


Edited last time by Chobitsu on 05/06/2021 (Thu) 08:40:38.
174 posts and 34 images omitted.
>>10326 The team running GPT-3 has chosen not to release that 'open-source' system "to protect the public". But somehow, (((strangely enough))), they have managed to take everyone's work into it and monetize it instead.
>As I understand it, there are other, non-Globohomo Big Tech/Gov efforts that have sprung up to combat this abuse, and are endeavoring to create effective alternatives.
>>10329 I've read on other pages that GPT-3 requires a NASA-tier supercomputer to run, including multiple GPUs to make the performance acceptable. Looks like it's not such a good option then, welp.
>As I understand it, there are other, non-Globohomo Big Tech/Gov efforts that have sprung up to combat this abuse, and are endeavoring to create effective alternatives.
What are those?
>>10326 >>10330 Correct, GPT-3 needs a whole server farm. Some anon here is trying to build something smaller. But keep in mind these are text generators anyway. It makes sense to feed all data into them, to anticipate what someone might say in response to something. Using one to get some vaguely specific output requires selecting the data going into the training, and the responses coming out still wouldn't necessarily make much sense. We have two threads for those topics, btw: >>22 >>250 - maybe we should go on there, though I think this has also been discussed already.
>>10329 There's GPT-Neo now that claims to outperform GPT-2: https://github.com/EleutherAI/gpt-neo It's basically a smaller version of GPT-3. I haven't had time to play around with it but it looks promising.
>>10332 >Maybe we should go on there, though I think this also has been discussed already. Good point Anon. I'll just pop over and leave crosslinks so researchers will see our little stub here too. >>10333 >digits confirm Neo That's encouraging Anon, thanks.

RoboWaifuBanners Robowaifu Technician 09/15/2019 (Sun) 10:29:19 No.252 [Reply] [Last]
This thread is for the sharing of robowaifu banners. As per the rules, follow these requirements:

>banner requirements:

File size must be lower than 500 KB and dimensions are 300x100 exactly.

Allowed file formats are .jpg, .png, and .gif.
93 posts and 80 images omitted.
Open file (16.49 KB 300x100 robobanner7c.png)
A banner inspired by Warhammer 40k.
>>10248 I like that design. Very evocative. AFAICT ADEPTUS is masculine, was this intentional? ADEPTA would be feminine, I think.
Open file (112.30 KB 370x712 lolicron.png)
>>10248 Good idea anon! We need a separate branch to those miserable puritans! If the Mechanicus wishes to outlaw "Abominable Intelligences" as heresy, then they can go worship their corpse emperor without any robowaifus. Pah!
>>10258 >was this intentional? Yes. We are the Adeti.
>>10313 >Adepti Why TH can't I delete my posts?

Open file (86.18 KB 300x299 digital_waifu.gif)
What happens to your robowaifu when you die? Robowaifu Technician 09/27/2019 (Fri) 10:47:48 No.829 [Reply]
Have any of you considered what will happen to your waifu after you die? How would you prepare her to face the world alone? Will she even be able to take care of her own needs when you're gone?

Have you considered the possibility that she might be so unwilling to let you die that you'll wake up in a robot body yourself one day? Would you resent her for not allowing you to die?
31 posts and 9 images omitted.
Open file (1.33 MB 900x1259 1612989524602.jpg)
>>9494 it's probably closer to ideology, but I agree
Open file (2.92 MB 1920x1080 Seraphim City.png)
>>9494 Funny you should say that anon! I am building a place called "Seraphim City" on Cities Skylines. It is supposed to have a solarpunk/utopian aesthetic and it includes a structure called "Cathedral of the Robowaifus". Except instead of burning incense and praying I reckon that mostly programming and overclocking goes on in there. But there is a huge electronic synth-organ and a Vocaloid robot choir that sings to mark special occasions.
>>9497 fuck that sounds like fun, i should do that
>>1536 >they become a beloved and valuable family heirloom and are passed from generation to generation that sounds a bit like greek daemons
>>10308 Personally, I got the idea from the movie Bicentennial Man (>>10310).

Modern C++ Group Learning Thread Chobitsu Board owner 08/31/2020 (Mon) 01:00:05 No.4895 [Reply]
In the same spirit as the Embedded Programming Group Learning Thread 001 >>367 , I'd like to start a thread for us all that is dedicated to helping /robowaifu/ get up to speed with the C++ programming language. The modern form of C++ isn't actually all that difficult to get started with, as we'll hopefully demonstrate here. We'll basically stick with the C++17 dialect of the language, since that's very widely available on compilers today.

There are a couple of books about the language of interest here, namely Bjarne Stroustrup's Programming -- Principles and Practice Using C++ (Second edition) https://stroustrup.com/programming.html , and A Tour of C++ (Second edition) https://stroustrup.com/tour2.html . The former is a thick textbook intended for college freshmen with no programming experience whatsoever, and the latter is a fairly thin book intended to get experienced developers up to speed quickly with modern C++. We'll use material from both ITT.

Along the way, I'll discuss a variety of other topics somewhat related to the language, like compiler optimizations, hardware constraints and behavior, and generally anything I think will be of interest to /robowaifu/. Some of this can be rather technical in nature, so just ask if something isn't quite clear to you. We'll be using an actual small embedded computer to do a lot of the development work on, so some of these other topics will kind of naturally flow from that approach to things. We'll talk in particular about data structures and algorithms from the modern C++ perspective. There are whole families of problems in computer science that the language makes ridiculously simple today to perform effectively at an industrial scale, and we'll touch on a few of these in regards to creating robowaifus that are robust and reliable.

>NOTES:
-Any meta thoughts or suggestions for the thread I'd suggest posting in our general /meta thread >>3108 , and I'll try to integrate them into this thread if I can do so effectively.
-I'll likely (re)edit my posts here where I see places for improvement, etc. In accord with my general approach over the last few months, I'll also make a brief note as to the nature of the edit.
-The basic goal is to create a thread that can serve as a general reference to C++ for beginners, and to help flesh out the C++ tutorial section of the RDD >>3001 .

So, let's get started /robowaifu/.
43 posts and 58 images omitted.
Open file (65.75 KB 1113x752 juCi++_181.png)
Open file (65.07 KB 1113x752 juCi++_180.png)
Open file (68.47 KB 1113x752 juCi++_182.png)
Open file (67.29 KB 1113x752 juCi++_183.png)
Open file (99.95 KB 1112x760 juCi++_184.png)
>>6922 OK, now that we've wrung our simple Robowaifu class with its sole member function out a bit, let's lift both it and the sayings std::map out into their own files, and we can begin using them outside our test system. Create two new files in our project, 'Robowaifu.hpp' & 'sayings.hpp' (File > New File). Cut & paste the Robowaifu class into its own file, and the sayings std::map into its own file. You'll also need to add the appropriate C++ library includes for each file as well. Eg:
#include <cstdint>
#include <map>
#include <string>
> #1 #2
Since we've moved the sayings std::map off into its own file, we'll need to include it into the Robowaifu.hpp file so the class itself can find the sayings data.
#include "sayings.hpp"
> #3
The test file now just contains the test (as we'd like), but we need to add an #include for the new Robowaifu.hpp file so the test can find the new location of the class itself.
#include "Robowaifu.hpp"
> #4
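For anyone reading along without the screenshots, the two new headers would look roughly like this. Note this is just an illustrative reconstruction; the actual member names and sayings in our project differ:

// sayings.hpp -- hypothetical reconstruction of the lifted-out data
#pragma once
#include <cstdint>
#include <map>
#include <string>

const std::map<std::uint32_t, std::string> sayings{
    {0, "Welcome home, Master!"},
    {1, "Would you like some tea?"},
};

// Robowaifu.hpp -- hypothetical reconstruction of the lifted-out class
#pragma once
#include <cstdint>
#include <string>
#include "sayings.hpp"   // the class needs the sayings data

class Robowaifu {
public:
    // look up one saying by number (name is my guess, not the original)
    std::string say(std::uint32_t n) const { return sayings.at(n); }
};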


Open file (55.46 KB 444x614 C.jpg)
Because I am a programming brainlet, I'm trying my hand at some C. The first exercise is a bugger, and my brain was like "Duurrr Fahrenheit to Celsius then Celsius to Fahrenheit then Fahrenus to Celsyheit and Fahrycels to Heityfahr?" However, once they introduced the 'for' statement I began to realise that this stuff has a lot of potential. Sure, it took me four hours to figure out how to code two temperature conversion charts, but once they are done, just changing a couple of numbers lets you convert between any temperatures in those units! Also, the whole calc can be reversed by changing only three characters. At first I was like -_- But then I was like O_0
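For other anons on the same K&R-style exercise, the whole chart boils down to one loop like this. The range and step are just the book's defaults, adjust to taste; it compiles as both C and C++:

/* Fahrenheit-to-Celsius table, in the spirit of the K&R chapter 1 exercise */
#include <stdio.h>

int main(void) {
    int lower = 0, upper = 300, step = 20;   /* table range and stride */
    for (int fahr = lower; fahr <= upper; fahr += step)
        printf("%3d F = %6.1f C\n", fahr, (5.0 / 9.0) * (fahr - 32));
    return 0;
}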
>>10301 Congratulations, Anon. You've opened up a whole new world. You can create entire universes out of your dreams now if you just keep moving forward. Programming with serious intent at the level of C is easily one of the most creative things you can do as a human being IMO. And C++ is unironically even better! This isn't a C class per se but it's pretty likely I can help you out with things if you get stuck. Just give me a shout.
>>10302 Will do OP, thanks! I'd heard that learning C is best before starting on C++, so I'll just keep on plodding. After all, if your waifu speaks Thai or Russian, then it's best to learn those languages. And if your waifu speaks C/C++...

Prototypes and failures Robowaifu Technician 09/18/2019 (Wed) 11:51:30 No.418 [Reply] [Last]
Post your prototypes and failures.

I'll start with a failed mechanism, may we build upon each others ideas.
73 posts and 67 images omitted.
>>10292 Ahh, good detective work.
>>10291 >I worked a bit on the chest, to put in spaces for ribs in another material and was also starting to work on a long term project of building the bones of a hand out of layers, which could then be PCBs (electronics) with sensors, plastics for the form and metal parts for strength. I've thought often about the field of neuromorphic computing, specifically as it relates to designing/engineering robowaifus. Embedding sensors, batteries, microcontrollers, wiring, and other electronics right inside the structural and actuator components is not only very bio-mimetic in design essence, it is also very likely to help bring the extreme high-performance characteristics of neuromorphics to the table. For example, embedding temperature sensors directly within the finger bones, and keeping the robowaifu's self-protection 'sense/react response cycle' to pull away from the heat, say, all 'short-circuited' locally right inside a simplified hand-local electronics/microcontroller/actuator system. This design approach can allow the response times for such a system to be very fast relative to more traditional, hierarchically-networked command & control mechanisms. Basically, in a somewhat similar way to biological adrenergic nervous-system response mechanisms, you want to push the 'computation' for such a system out to the edges of the physical structure, and not be so dependent on always 'phoning home' first to the higher-level computation systems of the robowaifu's 'mind'. That latter approach entails costly communications and other delays. Not that the signals wouldn't be sent on their way 'back up the chain' though. You definitely want the ability of higher-level control to override lower-level ones when needed; forging ahead into dangerous environments to protect her master, for instance, even when doing so conflicts with the most basic of self-preservation dictums. This round-trip would hopefully be completed within milliseconds (vs. the microseconds-level desired for pure local response times). My apologies for my probably confusing writing here Anon. This is a complicated topic and it's difficult for me to describe it concisely.
>>10297 As an additional thought on the specific HOT! PULL HAND AWAY IMMEDIATELY! example, the control devices could perhaps either open, or reuse, an emergency-response communications channel up to actuator systems further up the robowaifu's skeletal chain. So for example, the hand-local system would attempt to instantly flex the fingers back, but then emergency-response channels would be opened to the wrist, elbow, shoulder, and torso actuators, all in a tiered-priority chain, to enable fully pulling the hand entirely away from the danger, same as we ourselves would do after accidentally touching a hot iron, for instance. Each of these chained actuators would quickly add its own kinematic dynamic to the movement, and the effect would be propagating and progressive. The idea behind the 'emergency response' is that the higher-level analysis would be bypassed in the first-order response, simply to quickly save the robowaifu from immediate damage.
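A quick illustrative sketch of that tiered chain in C++; every name and threshold here is invented just to show the shape of the idea, not a real design:

// Tiered thermal-reflex sketch: react locally first, then escalate up the
// actuator chain without waiting on the high-level 'mind'.
// (Illustrative names/thresholds only.)
#include <cstdio>

enum class Tier { Fingers, Wrist, Elbow, Shoulder, Torso };

constexpr double kPainC = 60.0;   // assumed damage threshold, deg C

void retract(Tier t) {            // stand-in for the real actuator command
    static const char* names[] = {"fingers", "wrist", "elbow", "shoulder", "torso"};
    std::printf("retract: %s\n", names[static_cast<int>(t)]);
}

void thermal_reflex(double temp_c) {
    if (temp_c < kPainC) return;       // no emergency, nothing to do
    retract(Tier::Fingers);            // hand-local response fires first
    // Escalate in priority order; each tier adds its own kinematic motion.
    for (Tier t : {Tier::Wrist, Tier::Elbow, Tier::Shoulder, Tier::Torso})
        retract(t);
    // Only now notify the high-level controller (which may override).
    std::printf("notify mind: thermal event at %.1f C\n", temp_c);
}

int main() { thermal_reflex(85.0); }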
>>10298 One additional thing will need to be solved for this hypothetical situation. As we grow up, our entire physical being develops a kind of physical awareness that lets us intuitively discern where sensations are coming from in our body by mere touch, and usually more or less instantly. Vision and audio, for instance, are not needed to know you've just touched a /comfy/ soft blanket, or a cold ice cube spilled onto the counter. And not only do you recognize these kinds of sensory cues basically immediately, you also know where (to a first approximation) the touched item of interest is located, relative to your general body position. Again, this is all instinctive to us, and happens 'automatically' with little attention needed in most cases to figure these things out. Back to the HOT! emergency response: the robowaifu's system will need some kind of touch location-finder mechanism so she knows instantly where the hot plate is, and which way to yank her hand back out of danger. If this isn't done accurately, she could make a clumsy move in the reaction, and possibly damage herself, you, or something else. Again, this is something we all develop instinctively as we grow up, but we as designers and engineers will have to solve this kind of thing explicitly. I'd guess that a first-approximation approach would be to keep a general map of the surface normals of all the items in her local body-space area. This should at the least give her the direction to quickly move out of the contact danger (ie, out along the surface normal of the object and away). This situational-awareness solution needs to account for the fact that this 'normal-map' of her environment is dynamic, as both she and the elements in her environment are potentially in motion with respect to each other. This is really quite a remarkable domain to tackle from a systems-engineering perspective. Now that I've been applying myself to consider some of the many things all needed, most other design & engineering endeavors seem rather boring to me now. :^)
>>10299 Also, once you check my digits, another thing we might do is develop a sort of 'contact-pad volumetric triangulation' sensor model. The idea is you have many tiny pressure, etc. pads embedded into the robowaifu's 'skin'. Whenever she touches something, an approximation of its shape (and by implication, its surface normals) can be quickly simulated in her world model. For example, if 18 different pads on two of her fingertips all register a contact, then based on the kinematic/skeletal/etc. body-model simulation of her current physical position, she can 'triangulate' the surface shape of that object at its contact points with her fingers. Again, all instinctive for us... but for her it will need to be explicitly worked out in advance by trial and error during design.
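The triangulation maths itself reduces nicely to vectors: given any three contact-pad positions from the body model, the cross product of two edge vectors yields the local surface normal, which is also the direction to retreat along. A tiny sketch with made-up pad coordinates:

// Estimate a touched surface's normal from three contact-pad positions.
// (Positions would come from the kinematic body model; values here are made up.)
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

Vec3 sub(Vec3 a, Vec3 b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 normalize(Vec3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

int main() {
    // Three pads on two fingertips registering contact (made-up coords, meters).
    Vec3 a{0.00, 0.00, 0.00}, b{0.02, 0.00, 0.001}, c{0.00, 0.015, 0.001};
    Vec3 n = normalize(cross(sub(b, a), sub(c, a)));   // surface normal estimate
    // Pull away along the outward surface normal.
    std::printf("retreat direction: (%.3f, %.3f, %.3f)\n", n.x, n.y, n.z);
    return 0;
}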

/robowaifu/ + /monster/, its benefits, and the uncanny valley Robowaifu Technician 05/03/2021 (Mon) 14:02:40 No.10259 [Reply]
Discussing the potential benefits of creating monster girls via robotics instead of 1-to-1 replicas of humans, and what parts can be substituted to get them into production as soon as possible.

Firstly, many of the animal parts that could be substituted for human ones are much simpler to work with than the human appendages, which have a ton of bones and complex joints in the hands and feet. My primary example of this is bird/harpy species (image 1), which have relatively simple structures and much less complexity in the hands and feet. For example, the wings of bird species typically only have around three or four joints total, compared to the twenty-seven bones in the human hand, while the legs typically only have two or three, compared to the thirty-three joints in the human foot. As you can guess, having to work with a tenth of the bones and joints, and no opposable thumbs and all that shit, makes things incredibly easier to work with. And while I used bird species as an example, the same argument could be put forward for MG species with paws and other more simplistic appendages, such as Bogey (image 2) and insect hybrids (image 3).

Secondly, intentionally making her appear non-human circumvents the uncanny valley. It's incredibly difficult to make completely convincing human movement, and one of the simplest ways around that is just to suspend the need for it entirely. We as humans are incredibly sensitive to the uncanny valley of our own species; even something as benign as a prosthetic limb can trigger it. But if we were to create something that we don't expect to move in such a way, it's theoretically entirely possible to just not have to deal with it (for the extremities, anyway), leaving more time to focus on other aspects, such as the face. On the topic of the face, slight things could be substituted there too (again, for instance, insect girls), in order to draw attention away from the uncanny valley until technology is advanced enough that said uncanny valley can be eliminated entirely.

These possibilities, while certainly not to the taste of every anon, could be used as a way to accelerate production to the point that it picks up investors and begins to breed competition and innovation among people with wayyyyyyy more money and manpower than us, which I believe should be the endgoal for this board as a whole. Any ideas or input is sincerely appreciated.
2 posts and 3 images omitted.
>>10260 That was a fantastic video; it made a lot of sense. I'll look more into that point of view of emotion over likeness. Another tangentially related idea I had for cutting costs/production time, while not necessarily related to monster girls, is skipping over the voice synthesis and chatting capabilities in favor of pure voice/emotional recognition and expressiveness. A mute, until sufficient software can be developed, but one that understands and can communicate through body and facial language. So being pointed in the direction of expressiveness as opposed to pure likeness is actually a big helper on that front, thank you.
>>10266 One of the best parts about deciding to create a 'monster girl' is that you can just throw on unnatural stuff like tails or bird feet (you could even go as far as creating a chimera-type if necessary), as you've noticed, to help push things such as mobility and balance further, and more easily than would be possible with human appendages, which is the primary reason I'm aiming for it myself. As long as I can create something I can love and semi-interact with (which likely other people would be able to as well), the parts can be whatever. Monsterization out of utilitarianism, or something like that.
>>10266 >that don't realize art is hard, especially expressing it through mechanical engineering and AI. related. (>>10257)
>>10267 Thank you for spoilering the /monster/girl. I think it probably would be wise for us to address it here as a 'containment thread' topic generally-speaking.
>>10266 > that capp < skindeep/10
>>10260 *ALPHRED2

Open file (394.51 KB 1731x2095 1614545368145.jpg)
How to invest in Robo Waifus? Robowaifu Technician 03/07/2021 (Sun) 20:57:45 No.8841 [Reply]
Hi /robowaifu/, I've stumbled here after doing some research into creating a robowaifu stock portfolio. My thinking was: I am never going to be able to build a robowaifu from scratch, but I will buy one when they become available. I want to contribute somehow to their development, so I am creating a stock portfolio where I buy only stocks that I believe are working towards robowaifus, either literally or by just advancing the tech that would be needed: GPUs, semiconductors, AI, VR, robotics, batteries, silicon.
My current portfolio:
>NVDA - Nvidia - GPUs used in AI / Machine Learning
>AMD
>TSM - TSMC. The world's biggest and most advanced semiconductor foundry.
>INTC - Intel
>ABB - ABB Ltd - Swedish-Swiss company in automation technology, robotics, heavy electrical equipment.
>IRBT - iRobot - Home Robotics
>MSFT - Microsoft - They are investing heavily into AI and have one of the largest AI chat bots, "Xiaoice".
>TER - Teradyne
>IBM
Then buying into the Japanese stocks via ETFs such as BOTZ (Japanese heavy robotics, lower fees) and ROBO (more diverse, focusing more on automation tech). Individual stocks are better, but I have to cope through ETFs to get exposure to Japanese stocks such as YASKY (Japanese manufacturer of servos, motion controllers, AC motor drives, switches and industrial robots).


32 posts and 2 images omitted.
I think the RC idea is a good one, because we want more people in the cause, or at least to help bridge our way to robowaifus. >>546 has the best idea on how we can bring people to robots and robowaifus. But that might be off-topic.
>>9529
>But that might be off topic.
I suppose it's only fair to everyone that this question should be clarified in detail. Since you've brought the topic up and I'm motivated enough to, here seems as good a spot as any (we're generally already well off-topic ITT regardless).
-The subject of a thread is the first step in evaluating that, Anon. In this specific example that would be: How to invest in Robo Waifus?
-The second step to answering that question is the content of the OP. If it's a quality OP, then the subject will be properly expanded upon to help everyone else understand OP's intent better. Again, in this specific example, it's apparent the OP is interested only in stocks and the financial markets. Any considerations beyond that appear only, well, secondary to him. The quality of a thread's OP is certainly not a given. Some are better, some are worse. Often, the quality of an OP is directly proportional to the word count of the OP text itself; the longer the better.
-The third step requires more effort on an anon's part to decide; namely, actually reading the thread itself. By following the flavor of all the other -- at least tangentially on-topic -- posts in a given thread, one can often discern properly where the general consensus is going, and whether any given following post would be along that trend or not.
>---
So, following this pattern in evaluating the on-topic context of your post:
1. Doesn't seem really concerned with 'Investing in Robowaifus' (actually, the stock market). A No.
2. See above; since OP's idea of investing was strictly about the markets, again, a No.
3. Well, the thread itself was quickly derailed into ongoing discussions about RC'ing as a fun hobby, it being a field that has at least a tangential relationship to creating robowaifus. The investment aspect was plainly only a secondary consideration, if at all, in these numerous posts. The thread itself is therefore only a somewhat watered-down, off-topic thread at this stage. A Maybe.


Open file (56.08 KB 547x960 IMG_20210307_153723.jpg)
Here's an article on the BOTZ and ROBO ETFs. I didn't read it completely; I could imagine being interested later. I don't recommend anything, and especially not investing all of one's money into something just because one is enthusiastic about it. On the other hand, following news on some topic and being interested might help one judge the outlook of some industry better. https://www.thestreet.com/etffocus/.amp/trade-ideas/botz-robo-best-robotics-artificial-intelligence-etf-2021
>>10269 >On the other hand, following news on some topic and being interested might help to judge the outlook of some industry better. If the goal is basically just to acquire wealth in a kind of 'everyman' way, then I'd suggest ignoring robowaifus entirely, and simply 'riding the curve' of daytrading. Our friend Gunship over on /f/ discussed this topic in general. Personally I see our calling as much higher than simply cashing in on the market...
>>10271 I'm into crypto, profiting from the new financial system. I agree about not getting caught up on one topic like robowaifus. In any case, for someone who wants to do something else, it's best to speculate long-term rather than daytrade, be it crypto, the S&P, or themed ETFs like the ones mentioned. Whatever the case, I posted the link from >>10269 here because we already have a thread for it.

Who wouldn't hug a kiwi. Robowaifu Technician 09/11/2019 (Wed) 01:58:04 No.104 [Reply] [Last]
Howdy, I'm planning on manufacturing and selling companion kiwi robot girls. Why kiwi girls? Robot arms are hard. What would you prefer to hug, a 1 m or 1.5 m robot? Blue and white, or red and black? She'll be ultra-light, around 10 to 15 kg max, with self-balancing functionality. Cost depends on size: 1000 for 1 m or 1500 for 1.5 m. I'm but one waifugineer, so I'm going to set up around one model to self-produce. If things go well, costs can go down and more models can be made. Hopefully with arms someday.
73 posts and 44 images omitted.
Open file (36.27 KB 173x253 1619851210446wew.jpg)
this thread is kinda dead, but I'd recommend using your time on voice recognition and endearing expressiveness. it'd be a lot simpler and less time-consuming than getting a good voice output program going, and a bad one could even make her worse. so maybe some little simple wings, some legs, and a cute face, with voice recognition and facial/body expressions, but mute. that functionality will of course be added later, but much more important than functionality at base release is the initial push onto more people's radar, to bring in more money, innovation, and competition, kinda like with VR headsets
>>10227 Good points Anon.
>>10227 (Part One) To further elaborate on this idea, and then the technicalities of it and its relative benefits over a completely human body, take a look at this harpy (image one, very very cute, artist is rnd.jpg) and imagine that you want to build something along the lines of this model. The biggest and most apparent differences are, of course, the wings and the legs, which you may think would complicate the build further, but it's the contrary, as it would eliminate:
1. The need for human hands and feet, which have just a fuckin shitload of bones in them, which in turn makes for much more complex construction. For instance, a single bird's wing (image two) only has three joints from the main connector at the 'shoulder', compared to humans, who have 27 bones in just a single hand, a much smaller and denser space. The legs (image three) have even fewer, with a single 'ankle' joint and a 'knee' joint, although depending on the style of feet for the bird species, the toe phalanges (up to five per toe in some species!) may give some trouble, but that could be circumvented by simply creating the foot of a species with fewer of them lol. So not only would the construction be much easier, but it would have a higher relative quality.
2. The problem of the uncanny valley that could be triggered by limbs and/or appendages trying to perfectly resemble those of humans and failing. We have such a response to the uncanny valley because we're socialized with humans to notice even the slightest thing that's off, but we don't have nearly the same capability with other species, on top of the anatomy of certain ones being much easier to deal with.
Both those points are pretty applicable to several different species of monster girls as a whole, and make them, in my opinion, a much more viable alternative at this stage of robowaifus, much in the same way some anons push for RWs that go out of their way to not appear human while still being endearing. (shoutout to /monster/)
(Part Two) So on the other side of this proposal is the actual interaction, and how such things could be done for max 'oh god oh fuck I wanna cuddle you'. Both for capitalistic budgeting/marketing, and also because I would want one for myself.


>>10237 Very nice post Anon. I liked the details you went into about the bones/joints, and the /comfy/ homelife scenario you painted of Anon and his RoboPetgirl. Just as seems very commonplace in my favorite Chinese Cartoons on the subject, and as you brought up too, some way to solidly imprint a waifu (whether RW or Petgirl) onto her master alone needs to be solved. The obvious idea is some type of biometric scanning system for the Anon to use. I'd be hesitant to use just one method -- say, voice -- alone. Too easy to mimic. In general I'd say emotiveness is a very fundamental part of Character Appeal (one of Frank & Ollie's 12 Principles), and is something we all need to be creative about addressing well. Also, the eye color-shift thing is a nice idea.
>>10237 Glad I found your post anon. A sub-one-meter waddling, very fluffy harpy petwife that makes cute squeaks sounds really nice. Will save your "features list" for reference. As for recognizing her master, we can start really simple and have an RF (radio frequency) tag ring which the owner wears; the robot would just triangulate the location of that ring. Then add voice or facial recognition only if the system is secure and not in the wrong hands.
