/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

LynxChan updated to 2.5.7, let me know whether there are any issues (admin at j dot w).

Reports of my death have been greatly exaggerated.

Still trying to get done with some IRL work, but should be able to update some stuff soon.



Welcome to /robowaifu/, the exotic AI tavern where intrepid adventurers gather to swap loot & old war stories...

Robo Face Development Robowaifu Technician 09/09/2019 (Mon) 02:08:16 No.9
This thread is dedicated to the study, design, and engineering of a cute face for robots.
>>6024 That looks like he's improved the facial form of Erica? Seems like she has more appeal now.
>>6023 I kinda messed it up. It's supposed to be 2 for the neck, 2x2 for the arms, and 2x2 for the legs, so 10 in total. A Raspberry Pi could probably fit in the head or in a backpack. I'm thinking a backpack would be the better idea, because it'd be easier to dissipate heat and would leave more space for the batteries.
>>6022 Obviously these are multi-thousand-dollar robots that can wrestle, but if you haven't come across this video already, you can take away some good animation ideas from it. I suppose a tiny, fluffier robowaifu would wobble more, making her even cuter. https://www.youtube.com/watch?v=AZMmYF4G278
>>6033 >It's supposed to be 2 for the neck, 2x2 for the arms, 2x2 for the legs, so 10 in total Actually, I think having just a single actuator each is a better choice for a Fumo. You might want two for the neck (1 or fwd/bk, 1 for side-to-side), but I think just one each for the arms and legs would be good. The movement would be cute, and would work just fine for her form factor.
>>6037 >1 for fwd/bk*
>>6037 That would work. I might try that for my first attempt. I'd like for them to eventually be able to walk and point at things though. I was thinking of adding 2 more for the torso, or even 3 if they can fit, so they can wobble and balance themselves.
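To keep the two proposals above straight, the actuator budgets being weighed can be tallied in a quick sketch. This is purely illustrative; the joint names and torso option are labels for the counts discussed in-thread, not from any actual build:

```python
# Hypothetical actuator budgets for a Fumo-scale robowaifu, tallying
# the joint counts discussed above. Names are illustrative only.
FULL_BUDGET = {
    "neck": 2,                      # fwd/back + side-to-side
    "arm_left": 2, "arm_right": 2,
    "leg_left": 2, "leg_right": 2,
}
MINIMAL_BUDGET = {
    "neck": 2,                      # keep both neck axes for cute head motion
    "arm_left": 1, "arm_right": 1,
    "leg_left": 1, "leg_right": 1,
}

def total_actuators(budget, torso_extra=0):
    """Sum actuators across joint groups, optionally adding the
    2-3 torso actuators proposed for wobbling/self-balancing."""
    return sum(budget.values()) + torso_extra

print(total_actuators(FULL_BUDGET))      # 10
print(total_actuators(MINIMAL_BUDGET))   # 6
print(total_actuators(FULL_BUDGET, 2))   # 12 with a wobble torso
```

Even the difference between 6 and 10+ actuators matters a lot at this scale: it drives the controller channel count, battery draw, and how much heat that backpack has to shed.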
>>6044 >so they can wobble and balance themselves. You mentioned the idea of giving her a backpack for batteries, etc. If you made the backpack's, well, back rigidly attached to her internal armature, then you could use tiny versions of these gyros (firmly fixed inside the backpacks) >>5645 for helping with balance. That might enable you to get away with just one internal actuator per hip joint.
>>6045 If these didn't get so dang hot, they might form the basis for a gyro system since they spin so fast. As it is though, they probably aren't usable for Fumos. >>4505
Open file (91.11 KB 800x600 face-muscles.jpg)
Sorry if this question doesn't make sense, I don't really know what I'm talking about (even though I've been lurking for over half a year). Would it be possible to use something like a system of porous dielectric elastomers as artificial muscles to simulate the mimic muscles (picrel)? I'm specifically wondering about the sensitivity of the material, which I know is relatively high, but I'm curious if it's sensitive enough that I can get super, super small increments of movement to try and nail natural facial expressions as best as possible.
>>6562 I know a little something about facial animation, Anon, but not about the materials you mentioned. I found pic related and I'll skim it to see if I can add anything in response to your question. My from-the-hip answer is yes (but it will take both meticulous craftsmanship in construction and detailed control in the software design). >
Open file (1.65 MB 1270x903 1602918127297.png)
>>6563 I dunno if you've already read that, but I'll explain myself a bit further. DEs are part of a larger group of materials known as electroactive polymers (EAPs): materials that change shape or size when exposed to an electric field, used a lot in soft robotics as artificial muscle. It really caught my eye, but I'm only interested in the facial-muscle part of it, so I'm looking into different EAPs and systems that can do that. Off the top of my head the DEs looked most promising, due to things like low latency. The main questions I'm trying to get to the bottom of (I won't have the money for home experiments for at least a month, or I'd just find out myself) are how sensitive the material is (how little I can deform it) and whether or not it can hold a shape and then return to its original shape. If you happen to know of any better-suited EAPs or anything, I'll be glad to hear it. I think developing believable facial animation, particularly of the mimic muscles, is by far the most important part of the physical side of things, so I thought I'd try to mimic the muscles themselves instead of just the expressions. As long as they convey emotion in a human way, our unga bunga monke brains will be much more likely to accept them and escape the uncanny valley; the rest of the body is secondary to that goal. Any help and input is appreciated, anon.
>>6574 I'm partly through the book so far, and at this point I have no reason to revert my initial instinct: DEs can indeed be used to simulate realistic facial deformation. A combination of more rigid thin-films (ligaments) and more porous ones (muscles) would give the best bio-mimicry, both in design and in result. But this would definitely be a years-long subproject for an autistically-dedicated individual working alone; it could easily constitute the work of a good-sized team, and several papers' worth of research, if absolute realism were the final goal. The uncanny valley tends to drive /robowaifu/ towards a waifu-looking solution for most physical design work, though, including the facial systems. As a Character TD/Animator I can tell you that, interestingly, it's the small gap from the bottom of the upper eyelids to the top of the eyebrows that constitutes the lion's share of emotional believability in facial character animation. Probably ~75%+. The bulk of the remainder is related to mouth deformations. And ofc the contextual, sequential timing of everything is also a fundamental part of making humanly-believable animation. Combining body language and the face is more or less everything we mean by 'emotionally believable acting'. Other aspects (physique, costuming, environments, lighting, sound), while important, are simply ancillary to the fundamental art of acting itself. I'll be happy to work with you on this project if you choose to try, but be aware I'm not a mechanical engineer. Hopefully EEs and MEs are joining /robowaifu/, and they can help us out as well.
>>6562 Thanks for your input. I don't know enough about those, though I already have them on the radar. One thing you should always keep in mind is the lifespan of artificial muscles; that's the part I'm not sure about here. I do recall YouTube videos on how to build such muscles, though. Thanks for the reminder. However, another point might be that fictional anime waifus and fembots like Cameron (TSCC) have a rather limited range of facial expressions, and are still attractive.
>>4496 >>4979 If screen tech is used for eyes, should it be matte or glossy? If you want a shiny look on the eyes, it doesn't follow that a shiny screen is the best choice, as the bright light bouncing off the surface emphasizes how flat it is. Information from sensors detecting the direction of light sources could be used to animate reflections as if the eyes weren't flat. Those who are open to weird fantasy-robot eyes on screens, consider this effect: https://twitter.com/jagarikin/status/1331409504953540613 https://twitter.com/sina_lana/status/1331049253280497670 In human beings, the direction one looks shows attention, but also emotion (like looking at the ground when one is sorry, or at the ceiling when thinking hard, and not because the floor or ceiling are where the action is). The pupils react to light and show emotion as well. Couldn't something like the effect above be used to make the eyes more expressive than the real thing, to emphasize and distinguish a bit this emotionally expressive side of eyes?
>>7431 Seems to me that a matte finish would be best. But since it's likely to be brightly illuminated with any modern screen tech, the primary concern may turn out to be properly adjusting the brightness/luminosity. Thanks for the links; is there a Twatter front-end alternative that's usable? Nitter, I think it is. As far as your question about expressiveness goes, I think we can borrow a whole lot of good prior art from the area of character animation. I think the answer is 'yes'.
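The light-reactive pupil idea above can be prototyped in a few lines for a screen-rendered eye. A minimal sketch; the sensor scaling, pixel radii, and emotion offset are made-up values, not from any existing design:

```python
# Map an ambient-light reading (0.0 = dark, 1.0 = bright) to a pupil
# radius for a screen-rendered eye. All constants are illustrative.
PUPIL_MAX = 40.0  # pixels: fully dilated in darkness
PUPIL_MIN = 12.0  # pixels: fully constricted in bright light

def pupil_radius(light_level, emotion_dilation=0.0):
    """Interpolate pupil size from ambient light, then add an emotional
    dilation offset (e.g. surprise or affection widens the pupil),
    clamped so it never exceeds the fully-dilated size."""
    light_level = min(max(light_level, 0.0), 1.0)
    base = PUPIL_MAX - (PUPIL_MAX - PUPIL_MIN) * light_level
    return min(base + emotion_dilation, PUPIL_MAX)

print(pupil_radius(0.0))        # 40.0 (dark room)
print(pupil_radius(1.0))        # 12.0 (bright light)
print(pupil_radius(1.0, 10.0))  # 22.0 (bright, but emotionally dilated)
```

Layering an emotion channel on top of the physically-driven response is what would let screen eyes be "more expressive than the real thing", since the two signals can be mixed deliberately rather than being tied to biology.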
Open file (267.38 KB 2048x1153 hamcat_mqq-04.jpg)
Open file (184.73 KB 1920x1080 hamcat_mqq-07.jpg)
Open file (208.12 KB 1920x1080 hamcat_mqq-03.jpg)
Open file (195.40 KB 1920x1080 hamcat_mqq-02.jpg)
>>7707 and some responses are related to the face and skull. The pics in the project dump thread came from https://nitter.net/hamcat_mqq same as the ones in this comment. I don't think the mechanism for the eyes is anything special, but I can't tell for sure. There are also videos in the original source. Some anons like the eye design and eyelashes.
>>7725 Wow, this is very nice facial work, Anon. Thanks for sharing it here. I think we could fairly easily embed LEDs inside the eyeballs' irises, and lots of other low-energy lighting possibilities for our robowaifus are conceivable tbh.
>>4549 >Spoiler: For faces sculpting seems to be better. What I'm wondering is, if you can take a bunch of professionally-sculpted face meshes, run them through a number of parameterization algorithms, and then come up with a few sound principles for beauty and appeal (Nordic women's facial forms say) that are basically automatable using a GAN-like approach after the parametric analysis has been done.
>>8230 >that are basically automatable Just to clarify, I mean that the design generation itself is automatable, not the topic of automating the robowaifu face after manufacture.
>>8231 I'm sure something like such a generator will be available at some point. However, I was thinking more of one that would take in photos of a lot of pretty faces and create new ones, then make a mesh out of the selected one. Two separate steps, which I think already exist on their own. Or a bunch of photos of e.g. an actress, taken from different angles, would go in, and it would make a model; with enough models it could get better at putting out something pretty. My thinking was that a user would give it the names of some public figures (actresses or models) and it would generate examples which would be close but not exactly the same, and from those the user could choose the preferred ones, also making them more anime-like (Alita style). I don't really think that some highly specific facial features of Nordic women exist, or that it makes sense to go after that, though. Just make the eyes blue or green and the hair something between blonde and red or light brown, and maybe add some freckles. To fit the huge eyes, the face would have to change anyway. Also, if it follows the pattern of an actress, then it should put out what you want; if that includes something Nordic, it should be in there. One of my ideas in this area was that someone (a group, company, or person) could pay low-wage sculptors in poor countries to make models from a pool of pretty women (actresses and models). This would then be a pool which could be used directly, or to train such generators on creating new but similar face meshes. After all, don't forget we need the skulls for the faces as well, so they fit onto them, and then a way to animate it all with ease.
>>8232 You're correct that tools exist to do facial feature extraction and generate facial meshes (and other kinds) using multi-camera setups. An example of this technique (with a different kind of focus) is here >>1088 . But afaik they are still fully proprietary and highly expensive. Admittedly I haven't looked into this area for a year or two, so maybe good opensauce systems exist now; let us hope so. Certainly the all-in-one '3D' cameras are more numerous to choose from now. And as you suggest, collections of images can potentially stand in as a proxy for such a method, though the 'registration' part of the process is both tedious and error-prone. >To fit in the huge eyes, the face would also have to change anyways Fair enough. As much as I like Alita, I think Rodriguez ( >>4502 ) went just a touch overboard with the design. But really, the critique is only because the rendering otherwise is so good and so realistic that it triggers a bit of a 'wut' in me (and others). As a counter-example, here's work by an artist that I think has found a near-perfect balance between kawaii-eyes and facial realism, though in an artistic style. > One pretty famous example of your pool-of-artists idea that has actually been carried out by a very multi-talented guy is Ricky Ma's avatar effort >>153 . The very fact he created a small furor among women's advocacy groups as a 'creepy stalker' shows just how good a job he's done with it. Hopefully /robowaifu/ can manage to produce many, many examples that will do just as well (or even better)! :^) >--- -Update: Welp, I've attempted six times now to post this pic for you; obviously nuJulay won't cooperate atm. I'll try to do it again for you ITT later on, Anon. The artist is named Ivant Alavera.
>>8236 >Ivan Talavera*
>>8240 >>8241 Ah, thanks. Cute, but I'm also fine with Alita. I think the bigger eyes might help if she'd otherwise look rather young, to make sure she doesn't look human. >>8236 In software for users to edit, I found this: Gradient Mesh Illustrator - https://youtu.be/JEJHk9VRAEQ and similar stuff. Gradient Mesh seems to be the term to look for. Also VoluMax might help: https://youtu.be/4XdoN2-8Dg8 However, my point is rather that we don't need to copy some specific face anyway; this here is from 3 years ago, and a photo seems to be enough: https://youtu.be/u9UUWqVquXo - we only need to have it in a form we could print molds from. There's more on their site: http://www.hao-li.com/Hao_Li/Hao_Li_-_publications.html
Anyone seen this video? It does a decent breakdown of how unscientific most memes of the "uncanny valley" actually are. It may also help to clarify goals regarding robowaifu function and design (particularly of the head/face). https://www.youtube.com/watch?v=LKJBND_IRdI
>>8261 Yes, you probably got that from this site, because I posted it here (and on cuckchan, and on other occasions).
>>8266 Seems the conclusion is that all robots are designed to perform a specific task or small set of tasks. Robowaifus are, for the most part, designed to elicit a positive emotional response from the user. So I shouldn't try to make a robot in the image of a human that can carry out all of the tasks a human can, because I will just fail at both. Instead I think I'll focus on making a cute robowaifu who fulfills the function of reducing loneliness, rather than attempting one that can walk or perform complex motor actions like playing sports etc. We already have some pretty high-quality chatbot software in the form of GPT-3, which I should be able to link to my current text-to-speech program, so I just need to complete a reasonable-looking robowaifu body now.
>>8267 Being a cute waifu is the priority, and then improving her within those constraints. That's the way. I guess the more capable ones will simply be more expensive.
Open file (268.63 KB 1240x897 MjcxNzYxOQ.jpeg)
Hey, for those of us going for the human look, the use of prosthetics is always a good possibility. For the face, actual dental replacements (dentures?) could provide highly realistic-looking teeth for a waifu's bright, sunshiney smile. :^)
Open file (191.52 KB 802x1202 summer-glau_02.jpg)
>>8326 Correct, they had this idea mentioned on the original board on 8chan. I never looked into where these are available, what they cost, or where to get them. There must be some sources for the training of dentists or something; they seem to have some kind of dolls with fake teeth to train on. Not sure how hard they are, though. I hope we can get them made out of ceramics in some standardized sizes and don't need to build them on our own as well.
>>8331 Yeah, good thinking Anon. I bet we can source them somewhere on the cheap. Remember the mouth needs to be kept sanitized just like w/ humans, so the source needs to be reputable. Remember, your robowaifu will probably need to kiss you lots to stay happy! :^)
While not strictly RoboFace development per se, until we have a dedicated MOCAP thread this might be a good spot for this. I indirectly discovered this project today after looking into SingularityNet via Anon's post >>8475 . It's a tool that finds facial landmarks in video, helpful for things like facial retargeting, etc. https://github.com/singnet/face-services
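Once landmarks are extracted (by that tool or any other), retargeting reduces to mapping landmark displacements onto face-actuator commands. A minimal sketch of the idea; the landmark names, neutral positions, and gains are all hypothetical placeholders, not from face-services or any real rig:

```python
# Map 2D facial-landmark positions (vs. a neutral reference frame)
# onto servo offsets for a robot face. Names/gains are hypothetical.
NEUTRAL = {"mouth_corner_l": (30.0, 60.0), "brow_l": (25.0, 20.0)}
GAIN = {"mouth_corner_l": 2.0, "brow_l": 3.0}  # degrees per pixel of lift

def retarget(landmarks):
    """Convert landmark positions into servo offsets (degrees).
    Positive offset means the landmark moved up relative to neutral
    (image y grows downward, hence the sign flip)."""
    commands = {}
    for name, (x, y) in landmarks.items():
        _, neutral_y = NEUTRAL[name]
        commands[name] = (neutral_y - y) * GAIN[name]
    return commands

# A smiling frame: mouth corner lifted 5 px, brow lifted 3 px
frame = {"mouth_corner_l": (31.0, 55.0), "brow_l": (25.0, 17.0)}
print(retarget(frame))  # {'mouth_corner_l': 10.0, 'brow_l': 9.0}
```

A real pipeline would also normalize for head pose and scale before differencing against the neutral frame, but the core of retargeting is exactly this per-landmark mapping.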
Open file (1.30 MB 2731x4096 IMG_20210325_183338.jpg)
Open file (585.69 KB 2731x4096 IMG_20210325_183326.jpg)
Open file (1.01 MB 2731x4096 IMG_20210325_183343.jpg)
This is what is possible today, though lighting might be relevant for the look. Let's see how they'll look after being shipped, once some customers take photos and report on them. These are the Alita busts from Queen Studios. I already mentioned them here: >>8194
>>9260 Yep that's nice Anon, thanks for the updates.
So, things are moving forward here. A new network creates toonified faces out of real or made-up ones, and also allows mixing of two input pictures. >Our ReStyle scheme leverages the progress of recent StyleGAN encoders for inverting real images and introduces an iterative refinement mechanism that gradually converges to a more accurate inversion of real images in a self-correcting manner. https://yuval-alaluf.github.io/restyle-encoder/
>>10461 Oh, video: --write-sub --write-description https://youtu.be/9RzCZZBjlxM
>>10463 Thanks very much for taking the extra time to give a fuller youtube-dl command to use, Anon. Getting and keeping the description and subs will be important to anyone keeping a personal archive of YT videos, once cancel-culture Marxism literally deletes anything/everything that could possibly have any bearing whatsoever on either robowaifu creation, or anything else that could possibly help men. Since the Lynxchan software adds an '[Embed]' into the text of the command, I always put such a command here on /robowaifu/ inside codeblocks, since the CSS here disables this embed tag. youtube-dl --write-description --write-auto-sub --sub-lang="en" https://youtu.be/9RzCZZBjlxM
>>10461 >>10463 That's really cool Anon. He's humorous to listen to as well, his enthusiasm is great.
That screen one looks nice for a nanny.
https://www.thingiverse.com/thing:4865223 https://www.youtube.com/watch?v=8_wkbLL0fqM LED Matrix behind tinted plastic, cheap, easy, customizable
>>12938 Thanks, this might be something that fits very well with the basic idea of the board: making affordable robowaifus, which don't need to look like humans but can be a bit more on the robot side.
>related crosspost (>>13020)
Open file (47.85 KB 600x414 uploads.jpg)
>>12938 That's like Rina-chan's board. She's autistic and uses a board to convey her emotions, because making facial expressions is hard for her. Her board could be a really cute face for a robowaifu.
>>13560 This will be a thing in 5-10 years; all those kids growing up around masktards are going to be incapable of expressing emotions.
>>13560 This could probably get quite expressive with a high enough resolution.
Open file (108.85 KB 335x640 mace_griffin_acolyte.jpg)
>>13560 >>13563 >>13565 I don't know. It just reminds me of the cultist NPCs from the game Mace Griffin. Using a display instead of a real tangible face seems like a trade-off worth making if you want more face customization, but an emoticon-like face just seems like a really bad trade-off.
Simplified non-human faces go a long way towards bypassing the uncanny valley.
>>12950 >Making affordable robowaifus, which don't need to look like humans but can be a bit more on the robot side. Agreed. >>14870 I don't personally find that particular face appealing, but I think in large part you're correct Anon. At the very least, I'd suggest we all seek for ways to maximize the expressive potential of, while minimizing the underlying complexities of, our robowaifu's face systems and structures.
Open file (42.60 KB 720x540 MaidroidMiao.jpg)
Open file (113.77 KB 750x1000 Sinobu.jpg)
>>14935 Could you provide examples of faces that work in 3D at human scale? Most figures are cute at figure scale, but they look uncanny at human scale.
