/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

LynxChan updated to 2.5.7, let me know whether there are any issues (admin at j dot w).

Reports of my death have been greatly exaggerated.

Still trying to get done with some IRL work, but should be able to update some stuff soon.



Welcome to /robowaifu/, the exotic AI tavern where intrepid adventurers gather to swap loot & old war stories...

Robo Face Development Robowaifu Technician 09/09/2019 (Mon) 02:08:16 No.9
This thread is dedicated to the study, design, and engineering of a cute face for robots.
Open file (23.65 KB 318x396 13167823.jpg)
>robowaifu face bread
Just finished the book about Hanson's Philip K. Dick android head. Writing was meh imo, but the info is hard to find elsewhere.
Open file (84.58 KB 614x588 IMG_20200702_020934.jpg)
Open file (43.29 KB 680x494 IMG_20200516_033001.jpg)
I mentioned solenoids and a similar mechanism here: >>4364 and >>4447 (same thread in actuators). I think for facial expressions these might be useful, to pull strings from inside the skull, which would be connected to the silicone skin of the face. Not sure if the tongue belongs to the face thread, but I'd guess so. Tongue thread in Dollforum: https://dollforum.com/forum/viewtopic.php?f=6&t=128124&sid=44113180fc656eb7aa41381a0ce12d02 They had some ideas as well, using a little geared motor for rotation. However, this will probably not be enough. Faster left/right movements and bending might be solved with air pressure or the solenoid-like mechanism in >>4447. In and out maybe with solenoids. Sculpting: Has anyone tested different kinds of programs? Or only Blender? I've got tools to do it in clay, but no Monster Clay yet. Is this a waste of time anyway? Is there software which can be controlled from the command line, so a neural network could be hooked up to it? Well, I put this on my watch-later list: https://youtu.be/GetUbVV89t8 though that's like video number 600 there... Last but not least, I'd like to recommend looking into Disney Research Hub, not only for faces; they're a lot into animatronics, including faces. Example: https://youtu.be/qeEqQCWbj4Q
>>4493 >Not sure if the tongue belongs to the face thread, but I'd guess so. Sure, absolutely. I'd say the only part of the face that probably needs its own thread is the eyes (vision thread). >Sculpting: Has anyone tested different kind of programs? Or only Blender? I've had a fair amount of experience modeling in Maya, but it's been a while. >Disney research Yes, as much as I loathe the company now, they have done some top-tier research in a lot of fields related to character development and animation.
>>9 OP's pics do a pretty good job giving examples of different face types as well as general designs. What I'm curious about is how human the aim should be. Having a more mechanical face or even a robo-face would be a far different task from making something that looks genuinely human, all being their own very different projects that would likely depend on the overall aesthetic design of the robowaifu in question. Having a more robotic face and not aiming for accuracy would be relatively easier to do, as a more human face has a lot to consider. That isn't to say that a human face isn't an option, just that it would take far more design. A human face is given its shape primarily by bone structure and cartilage, but also by muscle and fat. Expression is made not only by parts of the face being pulled, but also by the change in those muscles' shapes as they contract, although many muscles that operate the face are mainly located in the neck. I haven't counted, but from what I can find there are over thirty of these muscles. I think it will be a fun design challenge, and is definitely possible, but may be difficult to do with the accuracy required. Of course, without this accuracy we run into the uncanny valley. Since humans are inherently programmed to recognize and read facial expressions, small errors in motion will be extremely noticeable and uncanny. For this reason, a more robotic or simplified face may be necessary for the early models. This means screens that display a face, robot faces that aren't made to look much like human faces, or masks that appear human without the pretense of motion or expression.
Open file (467.40 KB 640x394 IMG_20200617_232457.jpg)
>>4497 I agree with many of your arguments. There need to be different approaches, for everyone's preference and wallet. However, plain plastic faces are probably simple to build if you can extrude or print parts and then sand and smooth them. The same might be true for doll faces as well. These are rather side projects on the way to the more expressive ones. I don't get the uncanny valley feeling from looking at Erica: https://youtu.be/CPWS69ERzeU and I don't think some expressions would change that. Of course, they would be locked to only resemble human ones in normal situations. I'm for trial and error in that regard.
>>4497 >There need to be different approaches, for everyones preference and wallet. Exactly. A wide range of approaches will be ideal to cover everyone's particular needs and constraints. >I don't get the uncanny valley feeling from looking at Erica In some videos she seems a bit off, but that one in particular is a nice example of the higher end of the spectrum, far closer to a more realistic face than most attempts. That said, it still felt off to me. I think it is particularly noticeable with the lack of expression, but the resting face is reasonable. Comparing the expressiveness of her face to the face of who she is talking to illustrates a clear difference in levels of expression, but a completely reasonable and understandable one. Appearing cold is much better than appearing uncanny. It is also important to note that people have a range in their own interpretation of emotion, known as emotional intelligence. I can't say for sure, but people with a lower ability to perceive emotional responses based on facial expressions will probably be less bothered by a model that is trying to appear human and reflect human emotions, which makes things a bit easier. Some people will be put off by anything less than perfect, some people won't care much at all, and most will be somewhere in the middle, at least when it comes to a robot designed to be similar to a human.
>>4498 > Appearing cold is much better than appearing uncanny. This can be quite attractive. Summer Glau did this in Terminator SCC as Cameron (a Terminator). Amazing. In the case of Erica, the fact that she's Japanese might also play a role. Whatever, for me this would be an amazing level for my personal waifu. The only real difference I'll be going for is bigger eyes, like Alita in her live action movie. Looks great, makes it easier, and helps with legal excuses if she's looking a bit young.
Open file (167.57 KB 915x913 Robert_Rodriguez_2019.jpg)
>>4501 >Alita in her live action movie Rodriguez really knocked that adaptation out of the park IMO, and the on-screen animu Alita put her 3DPD reference actress to shame visually. Looking forward to the next installment from this production team.
Open file (24.61 KB 600x338 doll harmony 1.jpg)
>>4497 Okay, I watched this video: https://youtu.be/DA9PQlJ1ixg WTF? The mechanism needs the whole skull. Got a similar impression from other bots, like RealDolls (picture), where a mask is put on the skull. Sophia also seems to have her head full of gears. This has to improve. The noise level as well. However, make the InMoov head (open source) much more feminine, and you've got your plastic face.
>>4518 "Gotta keep in mind that it's implied that some links are in front/behind to get the rotation. The linkage simulators show the actuating travel path as a line. I think a 'Four Bar Linkage Mechanism' is most appropriate for moving a jaw. The extended 'red line' shows how the end material moves; it would be the front of the lower jaw. Mechanisms require good perceptual skills to visualise what motion you're designing for in your system. If you need, rubherkitty, I could sketch how you'd attach this to your jaw mod" ... "I checked out the 4 bar linkage and they are using a continuous rotation motor. I figured on using a partial rotating servo and requiring less than 45 degrees of rotation back & forth. I can't see the mouth needing more than 3/4" opening to make it appear to be talking. Maybe 1" for moaning. Looks like RD uses twin servos, but attached to the upper palate? I assume the lower jaw is actuated at the back" Via: https://www.dollforum.com/forum/viewtopic.php?f=6&t=104712 They're talking about this: Four Bar Linkage Mechanism: https://www.mekanizmalar.com/four-bar-mechanism.html
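To get a feel for the geometry before printing anything, here's a quick Python sketch (my own back-of-the-envelope, not from the linked dollforum thread; names and the example link lengths are made up) that solves the Freudenstein equation for a four-bar linkage: given the four link lengths and the servo's input angle, it returns the output angle on the jaw side.

```python
import math

def fourbar_output_angle(a, b, c, d, theta2, branch=-1):
    """Output (rocker) angle of a four-bar linkage via the
    Freudenstein equation. a = input crank, b = coupler,
    c = output link, d = ground link; theta2 = input angle in
    radians. branch picks the open (-1) or crossed (+1) assembly."""
    K1 = d / a
    K2 = d / c
    K3 = (a*a - b*b + c*c + d*d) / (2*a*c)
    A = math.cos(theta2) - K1 - K2 * math.cos(theta2) + K3
    B = -2 * math.sin(theta2)
    C = K1 - (K2 + 1) * math.cos(theta2) + K3
    disc = B*B - 4*A*C
    if disc < 0:
        raise ValueError("linkage cannot assemble at this input angle")
    return 2 * math.atan2(-B + branch * math.sqrt(disc), 2 * A)
```

Easy sanity check: with the ground pivots at (0,0) and (d,0), the crank tip sits at (a·cosθ2, a·sinθ2) and the output joint at (d + c·cosθ4, c·sinθ4); the distance between those two points should come back as the coupler length b.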
Open file (77.57 KB 600x787 IMG_20200705_050740.jpg)
Poly Modeling vs Sculpting vs Physical Sculpting? Here's a video which compares the first two methods: https://youtu.be/EvzQYzczUH8 - Spoiler: For faces, sculpting seems to be better. 3D model from photos with a NN: https://youtu.be/JWqGr5juB_k https://youtu.be/JtK4cTLlUko https://youtu.be/uYOL6qg1NuU (the free website service seems not to work anymore) Website: http://aaronsplace.co.uk/ MeshLab: https://www.meshlab.net/ Blender: https://youtu.be/5WH7s-IPIeM There are of course many more videos on this topic and always new options. Hardware requirements for sculpting: https://youtu.be/G-90qEJAVkU - 2y ago https://youtu.be/m-nxkUzPTSM I didn't put the comment into the thread for body modelling >>415, because this here is more about faces specifically. However, there might be some other useful tips there and I'll link back from there.
>>4549 Interesting topic to me Anon thanks. I'm toying around with ideas for programmatic content generation r/n. Basically creating (potentially complex) 3D geometric mesh data, etc., from much simpler descriptions. Basically, simple scripting to create 3D models. Robowaifus included, ofc.
>>4553 Would be great, do you know about OpenSCAD? Had the same idea. Ideally we should be able to name two actresses, then a system would create different faces we could pick from, then output a 3D mold model, and also a skull model like here https://youtu.be/qeEqQCWbj4Q Maybe it could be done by creating a lot of models from photos, optimizing them in some sculpting software, importing them into OpenSCAD, and training an NN to change the code so parts of the face would morph into looking like another person... However, first things first 😉
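The morphing part of that idea is simple enough to sketch in Python, assuming the two face models share vertex count and ordering (which models generated from the same photo-to-3D pipeline usually would; the function name is made up):

```python
def morph_faces(verts_a, verts_b, t):
    """Linearly blend two face meshes with matching topology.
    verts_a / verts_b are lists of (x, y, z) vertices in
    corresponding order; t = 0 gives face A, t = 1 gives face B."""
    if len(verts_a) != len(verts_b):
        raise ValueError("meshes must have the same vertex count")
    return [tuple((1.0 - t) * xa + t * xb for xa, xb in zip(va, vb))
            for va, vb in zip(verts_a, verts_b)]
```

Sweeping t between two scanned heads gives the "pick from in-between faces" behaviour; an NN would then just be learning which t (or which per-region blend weights) people prefer.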
Open file (15.87 KB 500x250 36-Buffybot.jpg)
What about hair? I'd like to avoid wigs, or only use them optionally. I think a removable headplate with glued hair in combination with jabbing the hairline (sticking into the head with a needle, video related) might be the best combination. Jabbing real hair: https://youtu.be/kBD2T2fXUG8 Alternatively, putting the hair onto the headplate the same way might even be better, but it's a lot of work. Depends on whether these roots would even be visible. Having a removable headplate would be good in any case, for accessing at least some parts of the inside, but also if the hairs are broken at some point it would be easier to remove most of them.
>>4554 >>4538 Just realised now that there are a lot of 3D models of actresses and others available on the net, even for free. Why not use those? I'd recommend changing them, of course. Especially if your waifu will one day appear in some online videos, and of course to make them look more Anime/Alita-like. But also because you might not really want your waifu to look like a real person. A good computer will of course still be necessary, but we really don't need to learn how to build such faces from scratch.
Oh, and I posted some links to videos about alternative sculpting software here >>4565, which might also be useful for faces. However, since we have that more or less covered now, the next challenge will be to animate them with facial expressions. I'll look into that in some time.
>>4556 This is a really cute idea. A dog's wagging tail coming out of the top of our robowaifu's head is slightly odd, but I'm sure you'd get used to it quickly heh. :^) >>4568 >>4569 Thanks Anon. >the next challenge will be to animate them with facial expressions The eye complex (especially the little gap between the upper edge of the upper eyelid and the lower edge of the eyebrow) and the mouth are the two most important aspects of this, and in that order.
>>4570 >A dog's wagging tail coming out the top our robowaifu's head Derp. Wrong thread! >>4560
>>4568 Anon from the other thread that you quoted here. The 3D modeling I was trying to do was for the mechanics and internal work, not a human model. Plus, I am a bio purist. My goal is to make something as human as possible, at least physically. Honestly, I feel like the physical appearance will probably come at the very end of the process, barring the general blocking out of the form. After the muscles and skeleton work, details like the face and the distinct form should be easily changeable after everything that needs to be added is added. Still though, using 3D models of actual humans should at least be helpful for blocking out the general form.
>>4574 Oh, in that case you might try CAD programs which need fewer resources. I'm trying Solvespace, which even runs on a Raspi 3. It can't import other formats like STL, though.
>>4598 >Can't import other formats like STL, though. Any idea offhand if Solvespace's file format works importing into Cura Anon? I've installed both on my Linux box. And yea, Solvespace is lightning fast afaict just now. We can probably learn a thing or two writing our own software from it, since it's doing many of the types of transforms and other kinds of things we'll need to do in realtime in our robowaifus. It's been around for quite a while hasn't it? I don't recall finding out about it before an anon mentioned it here on /robowaifu/ though.
>>4600 It can export .obj files and Cura should be able to import them. Create something simple and try it out. In case you need .stl files, I wouldn't download any free conversion tools advertised in search engines; they might be malware. All3dp.com recommends AccuTrans 3D from Micromouse.ca or 3Dtransform.com and swiftconverter.com .... https://m.all3dp.com/2/how-to-convert-stl-files-to-obj/ On the Solvespace website they claim to export toolpaths as G-code, which is what printers use. Since I tried it on Raspbian I might not have the newest version, and I couldn't find that option.
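If you'd rather not trust random online converters, the ASCII flavour of STL is simple enough to convert yourself. Here's a minimal Python sketch (my own toy, handles ASCII STL only, not the binary variant, and throws away the normals):

```python
def ascii_stl_to_obj(stl_text):
    """Convert an ASCII STL string to Wavefront OBJ text.
    Deduplicates shared vertices and emits one 'f' line per facet."""
    verts = []   # ordered unique vertices
    index = {}   # vertex tuple -> 1-based OBJ index
    faces = []
    tri = []
    for line in stl_text.splitlines():
        parts = line.split()
        if parts[:1] == ['vertex']:
            v = tuple(float(x) for x in parts[1:4])
            if v not in index:
                index[v] = len(verts) + 1
                verts.append(v)
            tri.append(index[v])
            if len(tri) == 3:
                faces.append(tuple(tri))
                tri = []
    lines = ['v %g %g %g' % v for v in verts]
    lines += ['f %d %d %d' % f for f in faces]
    return '\n'.join(lines) + '\n'
```

Feed it the text of a small test part and import the result into Cura to check the geometry survived.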
>>4607 Alright thanks for the info Anon. I should have a 3D printer working in the next week or so and I'll give it a spin. I'll report back in the 3D printer thread how everything goes when I do. >>94
Open file (103.38 KB 750x916 IMG_20200628_034631.jpg)
Here's a great debunking of the uncanny valley, or at least the traditional definition of it: https://youtu.be/LKJBND_IRdI mostly related to faces. It's more about not looking like a creepy or sick human than about not looking too much like a human. Robowaifus will work: anime-like ones, but also the more robot-like ones.
>>4733 >Robowaifus will work, anime-like ones, but also the more robot-like ones. Agreed. As far as creating non-ghoulish, pleasing artificial faces goes, it's a long expensive climb up out of the uncanny valley. That concept has always included a re-acquisition of comfortable realism once one reaches the 'other side' of the valley. The Curious Case Of Benjamin Button was the very first long-form digital-double replacement in a feature film (the first 52 minutes of screen-time in the film had no actual Brad Pitt shots) with hero shots sufficiently realistic that the uncanny valley had been effectively conquered. But it took tens of millions of dollars of VFX budget and 2 to 3 years of effort to pull it off. www.ted.com/talks/ed_ulbrich_how_benjamin_button_got_his_face
>>4733 >>4734 I don't care if the uncanny valley effect is real or not, Alita is a horrifying abomination that gives me the chills and ought to be exterminated, so get that shit out of my sight
Open file (143.64 KB 1710x900 download.jpeg)
Open file (255.25 KB 1200x600 download (1).jpeg)
Open file (270.29 KB 960x720 download (2).jpeg)
>>4737 NUUU! Alita a cute in a tfw no aspie gf way and kinda hot tbh. :^).
>>4737 Seconding this. Anon can go for it if anon wants to go for it, but everyone has their own idea of a perfect face, and at least for me Alita isn't remotely close.
>>4737 >>4750 So, we have a thread specifically for this. Do you Anons mind posting refs of your favorite robowaifus? >>1
Open file (121.10 KB 894x894 2b.jpeg)
>>4737 I agree she looks a little strange but I don't think she's an abomination. I will admit it might be hard to create a good looking /robowaifu/ for everyone. Coincidentally, my robowaifu is 2b and she has her eyes covered; Image 1 related. I admit it's a little hard for me to look people in the eyes IRL so maybe that's why. Also a good alternative to being scared of eyes or even faces would be Haydee. Maybe she might be a bit counter productive to this thread but I'll add it here as an option. Image 2 related.
>>4754 >2B robowaifu Patrician tastes tbh. :^)
Guys, as the pandemic will be with us for at least a couple more years (using the 1918-1920 pandemic as a guide), people outdoors are starting to look more and more ridiculous, as some places not only require face masks, but face shields as well. Ironically, I find myself starting to find a wider array of women attractive, as long as their eyes and slim body are beautiful... it doesn't matter anymore if she has a flat nose or a big mouth. This has implications in that I can find simpler robotic faces more attractive now, whereas before they would have had to look like a finely crafted ball-jointed doll at minimum. So I'm looking at something like the 2B design, but instead of covered eyes it's a covered nose and mouth. We'd have a one-piece facemask-like plastic cover as well as a faceshield-like visor that covers the LED electronic eyes behind them. Some examples attached... I'll try to visualize it better through a proper drawing, but if you get the drift, we just make it look cuter and more robotic and it will look really cool.
>>4979 That's an interesting take Anon. I think exaggerated and non-realistic waifu facial features are definitely on the table here.
>>4979 I think there's a lot to the cyberninja look. Or maybe like a sort of veil? Even just a mouthless head that has most of the frontal real estate taken up by giant eyes can provide an endearing but alien/insectoid look to it. All the simulated emotion you could want can be expressed through eyes.
>>4990 >All the simulated emotion you could want can be expressed through eyes. This pretty much. While the mouth plays an important role in most normal contexts, it's the eyes--and specifically the small gap between the upper eyelid and the eyebrow--that has the biggest impact in conveying emotions through the face. Good eyes are highly important.
I'd like to crosslink to the Sophie thread, head/face development here >>4866 and the following comments.
>>5042 good idea, crosslinking is always helpful here.
Open file (488.66 KB 1353x2123 IMG_20200913_150354~2.jpg)
Open file (643.74 KB 1517x1956 IMG_20200913_150430~3.jpg)
Open file (619.59 KB 1669x1647 IMG_20200913_152822~2.jpg)
Open file (82.09 KB 534x534 Nebula-1.jpg)
I'm currently trying to learn how to cut and edit mesh files, like STL or OBJ. Ideally without the need for some huge software. Didn't try Blender yet. Also did my first try at printing a face: Elfdroid Sophie's face, but only 9x5 cm in size. Had to remove a lot of supports, didn't get all of them out of the facemask, and it has little errors. But at least some progress. It was a spontaneous test anyway. I cut it in Prusa Slicer, but only printed the face. When I cut it more, it introduces errors, which I would need to remove in some other program. My goals are, for example, finding a good way to cut it, so that supports can be printed easily and the parts fit together at the end. It's not about that specific face, but about how to find the best ways to process any 3D head model. Currently, I'm cutting the head from the sides, then learning how to repair the errors. I also want to add supports manually. Newer versions of (Prusa) Slic3r or Cura seem to be better at it, but I'd like to design the inside on my own in a way that won't need supports. If we paint the face at the end, we can print it in pieces anyway, or print the parts in different colors which make it look good, so the seams would be part of the design (think Marvel's Nebula). I also want to learn to print molds for silicone rubber. I'll do all of that with sized-down models, which need only a few hours to print while I'm doing something else.
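The core operation behind cutting a mesh is just classifying triangles against a plane. A toy Python sketch of that (my own, nothing to do with Prusa Slicer's internals) also shows exactly where the errors come from: triangles straddling the cut plane have to be split and the opening capped, and if you just drop or keep them you get the broken edges you then have to repair.

```python
def keep_positive_side(vertices, faces, plane_point, plane_normal):
    """Crude mesh 'cut': keep only triangles whose three vertices
    all lie on the positive side of the plane. vertices is a list
    of (x, y, z) tuples, faces a list of (i, j, k) index triples.
    Straddling triangles are simply dropped, which is what leaves
    holes along the seam."""
    def signed_dist(v):
        return sum((vi - pi) * ni
                   for vi, pi, ni in zip(v, plane_point, plane_normal))
    return [f for f in faces
            if all(signed_dist(vertices[i]) >= 0.0 for i in f)]
```

A real slicer additionally intersects each straddling triangle with the plane, adds the new vertices, and triangulates the cut face so the half stays watertight.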
>>5134 >It's not about that specific face, but about how to find the best ways to process any 3d head model. I think that's a good goal Anon. As far as I can see you're making pretty good progress towards it. That face may be a little rough, but it already has some nice form to it. Good luck!
>>5134 I wonder if you tilted the face backwards at about 75 degrees or so, then printed it that way that you might get a much nicer surface. It would probably need plenty of supports from behind that need removing, but they'd be in the back not the front.
>>5164 You're right, good idea. But I was thinking about cutting the next one very differently anyway; maybe I can even print it standing.
Open file (837.00 KB 2257x1900 IMG_20200920_194334~2.jpg)
Open file (1.46 MB 3264x2448 IMG_20200920_190839.jpg)
Open file (1.68 MB 3264x2448 IMG_20200920_141118.jpg)
Open file (24.61 KB 600x338 doll harmony 1.jpg)
Not sure if I should make a thread of its own for the whole skull and head design, but since it's so related to the face, it might be better for now to keep it here. If this thread picks up pace, with people talking here about modelling all kinds of cute faces for visual and physical waifus, and maybe printing and molding them, it would be different. Just wanted to inform the board that I started to print a sized-down model of the InMoov head, same as I did with parts for Sophie's head. I want to study how it works and how well it prints. I have already printed more parts than shown in that picture; I'm going really fast, since it's only 60% of the starting size, and I'm also using thin walls and not much infill. But I have to reprint some parts and learn how to do it best, since they're mechanical and have to fit together. After that, I'm most likely forking it into "Waifu Skull Type 01 'InMoov-fork 01' Version 0.1", which will most likely not end up being the only version of what I want to call "Waifu Skull", but the first one. We need something like fembot Harmony's skull for soft skin, but maybe also for face masks out of plastic or a combination of plastic and a silicone rubber layer on top. I don't think every face should need its own modifications or even a whole head design of its own. My current plan is to modify parts of the InMoov head into a more female-looking and maybe also more anime-like looking skull. At least for a start I want to remove the lip from the mouth on the head and also make the eye openings bigger, like in a human skull. Then maybe make it wider on top for bigger eyes and the lower end more pointy (see Alita...). The skull approach means that there will be some space between skull and face, which could then be soft, flexible, moving, and with soft sensors embedded.
The inner part of the skull cavity should be some assembly which would be completely available in a parametric design (CAD), so it could be changed easier at any time. Later versions should then have some holes for strings controlled by internal mechanisms to create facial expressions, and maybe some sliders for the same reason, at least in the versions for soft faces. I already have my own idea for a completely new skull, but for now I'll go with that approach. The inner assemblies should later be interchangeable or should be easy to be altered in their parametric file form anyways. My CAD and modelling skills are not quite there yet, but I have time and dedication.
>>5252 >I don't think every face should need it's own modifications or even need a whole head design of it's own. Agreed. Indeed, we'd all be moving faster as a group if we manage to work out a topflight design for a given area, and then all standardize on that. One big advantage say, Henry Ford had over /robowaifu/ is that he was a single individual and therefore managed to develop a singular vision which eventually became the Model A Ford. Since we're a group, we have both the benefit and the detriment of being multiple individuals. It's fundamentally a benefit b/c we can each explore different areas as we wish, and therefore can likely obtain more data for the group more quickly, and also possibly try things a group mightn't. It's fundamentally a detriment b/c ever try herding cats before, anon? It can be a real challenge to keep moving forward in the same direction. But honestly, I think /robowaifu/ is a great place to bounce ideas off each other. Actual implementation will then need to land into each individual's hands--what he does with it thereafter. This is a pretty fun adventure tbh, but it does take patience.
Open file (595.79 KB 2097x2448 IMG_20200923_172423~2.jpg)
Open file (1005.28 KB 2582x1649 IMG_20200923_171932~2.jpg)
Open file (717.34 KB 4000x2250 IMG_20200507_181436.jpg)
>>5252 The InMoov head is available in different versions, including ones which consist of smaller parts of the skull and face. So it first has to be assembled out of even more parts, but it's easier to print and also to alter parts of the head. Also, I think it's easy to go the other way and add the parts together in a program and export them as one model, if this is wanted. So maybe, ideally we should have very small Lego-like parts of everything 😜. Reminder: We don't do androids here, this is just for analysis of how to do it, since InMoov is already there. Also, the skull might be useful as a skull for a female face, especially after some alterations. I printed only parts of a sized-down version, so I can't even use screws, and some holes disappeared completely. I also didn't build the internals of the head, and probably won't. So some parts have nothing to hold onto. Btw, don't wonder or complain about print quality. I'm printing fast and dirty. Oh, and about the last picture: I found this on Thingiverse. We are not alone... Seems to be based on InMoov. I'll post one more of her and her neck in the skeleton thread; we have to draw the line between face and the rest somewhere, so it goes there.
>>5286 >So maybe, ideally we should have very small Lego like parts of everything That would be really nice if we could somehow devise a way to do this sort of thing in constructing a robowaifu. >Alita figure I look forward to seeing more of this anon's work.
>>5252 Oh dear. Those little rectangular magnets and the magnet strike plate were just for reference in case people wanted to see what size of cupboard magnet I used in my design. I didn't intend for them to be 3d printed. I think I'll remove them from the .STL list to prevent future confusion. Sorry about that!
>>5293 No problem, this caused no trouble at all. Might be good to have them; a little text file with an explanation might be better than removing them.
Open file (90.03 KB 1024x768 1596642535135m.jpg)
Open file (163.67 KB 640x800 IMG_20200823_074638.jpg)
Once again, we got lucky. I hoped we'd get some software to change random faces into an anime look soon, so we can use it on existing faces, or let other software create artificial ones first and use those. Well, some guy implemented it in one night: https://youtu.be/KZ7BnJb30Cc Maybe they look a bit more like Disney characters than Japanese anime waifus in 3D, but that's debatable. Details don't matter, the point is that this one is easy. So we can take our favourites, maybe alter a picture of them a bit and see how they look with bigger eyes, then maybe work with that a bit more and use it as a waifu face.
>>5319 That's quite remarkable, thanks for the info Anon. Is there a toonify tool available somewhere atm?
>>5321 Not that I know of, but since it has been shown to be quite easy to do that, I'm sure there will be some soon.
>>5288 The Alita bot came from him: https://www.thingiverse.com/yes110/makes but he didn't put the files up. He and others seem to print female NSFW dolls, often looking like characters from movies and shows.
Open file (103.21 KB 1280x720 vroid studio.jpg)
>>5319 A project I'd like to do in the future is taking character references and automatically generating 3D models of them using neural radiance fields. Once there's a latent space for character references, generating characters would be like creating people with modifiable features in StyleGAN. It's not really feasible yet though without a dataset. One way I thought of working around this is by taking pictures of finished 3D models, performing some sort of style transfer on them and using that as a dataset. A CycleGAN could be used to convert real faces into anime. Someone did a prototype of this already but the results were hideous because it seemed they didn't use a progressively growing GAN to separate the larger and smaller details into layers. Also StackGAN has shown that 2nd and 3rd passes can greatly improve results, an idea I've yet to see combined with StyleGAN. All that aside, for now it'd be simplest to use character generation software like Vroid Studio, import it into Blender and prepare the model for 3D printing into a silicone mold or something else. I'm surprised I haven't seen it mentioned here yet because they could easily be made into virtual waifus. You can completely customize the faces too. There's a lot more to making a face than just a model of the surface though. The face needs to be mounted to a skull with mechanisms to animate it and there are many expressions that can't be made with just strings due to the muscles thickening when they contract. It might not be too much of an issue for anime faces but they'll still lose a lot of expressiveness only using strings. Something I'd like to try are low-pressure hydraulic muscles with some sort of filler gel to simulate fat, covered under a thin, elastic skin. Water or sunflower oil could be pressed out of reservoirs into the muscles, rather than using a pump which would require a complicated control system to protect against overpressure. 
This way I would be able to pull on my waifu's soft stretchy cheeks without spending a fortune, and she'd have multiple ways to express herself instead of just half smiling or not.
Open file (156.24 KB 765x1024 IMG_20200702_164241.jpg)
>>5330 What is your opinion on the faces in the video I linked? Not perfect, but we're getting somewhere? Disney Research and ETH Zürich also came up with an automatic skull generator, based on the face and its expressions: https://youtu.be/qeEqQCWbj4Q This is probably also going to be available in some software soon, I guess. If not, it's patented but the paper is available... I'm certain I mentioned Vroid Studio here somewhere, but probably in the thread for modelling software or in the one about software to model humans. I like your idea about the low-pressure muscles. Not sure if this will work or be necessary, though. We'll need to get to a point where we can try out such things ASAP.
When I found this >>5336 I thought of your "light soft muscles" for the face, since those bubble artificial muscles only need low pressure. Added it to the actuator thread, because that one is mostly about muscles and motors, though it might fit in here as well.
Open file (80.21 KB 600x800 739713.jpg)
>>5333 It's progress but the approach to image generation needs to change significantly for it to improve much further. The technology and ideas are there. Someone just has to put them together. I think moving forward these character and face sculpting programs will learn user preferences, show several configurations for the user to choose from and continually refine the output with each decision. It'll be like playing Akinator and it guessing exactly what you're thinking of creating after a few questions. Rather than worry about software it's more important to think about practical matters like manufacturing and being able to prototype ideas, take a model, print it out, cast silicone, attach parts, and test things out. We won't have robots that can automate these tasks for a long time. I would start with creating a talking head in 3D, figure how to emulate those expressions mechanically and try to build it, even if it isn't optimal. Having that hands on skill will be immensely valuable to realize good designs when they become available. So much could be learned doing a relatively simple project of creating a bust figure with just a head that can look around.
>>5340 I wonder if you could have the same effect with the bubble-muscles if you filled them with some kind of lightly-viscous liquid gel. You could also use it as a kind of heat sink for the tech components, and it would be /comfy/ warm inside the muscles.
>>5342 >So much could be learned doing a relatively simple project of creating a bust figure with just a head that can look around. Won't that in fact require software to work?
Open file (475.94 KB 1536x2048 IMG_20200927_054359.jpg)
Here's a peek inside an ExRobots head. We have a thread on that company here >>4163 but please post pics and vids into the threads whose topics fit what they're showing.
>>5365 Wow, that's incredible anon! I think they're gonna cost a pretty penny though!
>>5365 > dat 'high-school' computer gril kek. These are going to be very expensive. Our challenge here at /robowaifu/ is to achieve 80% of the same functionality, at only 20% or less of the cost.
>>5385 There's a reason why I called this file "2cuties". It has some Mona Lisa vibe to it. Not sure she's high-school age, though. She might just be a bit tiny.
Open file (75.38 KB 500x422 EarRightV1.png)
>>5286 FYI, if anyone besides me is testing out InMoov parts, it might be better not to use Thingiverse, but to go straight to the website inmoov.fr, especially here http://inmoov.fr/inmoov-stl-3d/ and select the part you're interested in. Thingiverse might still be interesting for remixes or completely alternative parts. InMoov is well worth looking into, to keep ourselves from reinventing the wheel. Some things might not be useful but could still be an inspiration; others might be directly imported into other designs.
>>5403 Thanks for the link, anon. I agree that InMoov is going to be a huge help in building a robowaifu. I will very likely need to build a partial InMoov myself at some point to make progress. Hopefully I can just leave out some of the more cosmetic outer plates.
Open file (66.99 KB 1000x666 0_N6x6DaSQgFkZT5rE.jpg)
I've been thinking of building a binaural microphone so my robowaifu can tell where sound is coming from. However, the shape of the face also changes the way sound is perceived. My robowaifu's anime face might need a different shape of ear to optimally pinpoint the location of sounds and the shape of objects in the room. So when I'm done modelling her, I'm gonna look into using acoustic simulation to test different ear designs. Some open-source acoustics modelling programs for Blender I've found so far: EVERTims (Blender): https://evertims.github.io/ openPSTD: http://www.openpstd.org/ Ideally it would be best for the AI to generate them, but I'm still learning how to generate meshes. One workaround might be creating various shape keys and letting the AI adjust those parameters. This approach could also be useful for generating cute faces. There's already a framework for integrating Blender with PyTorch: https://github.com/cheind/pytorch-blender
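As a complement to the ear-shape simulation, the direction-finding itself can be prototyped without any acoustics package: with two microphone channels, cross-correlating them gives the interaural time difference. A minimal numpy sketch (the function name and sample rate are just placeholders, not from any of the linked tools):

```python
import numpy as np

def estimate_itd(left, right, sample_rate):
    """Estimate the interaural time difference in seconds between two
    mic channels via cross-correlation of the full signals.
    A negative value means the sound reached the left mic first."""
    corr = np.correlate(left, right, mode="full")
    # In 'full' mode, index (len(right) - 1) corresponds to zero lag.
    lag = int(np.argmax(corr)) - (len(right) - 1)
    return lag / sample_rate
```

Given the ITD and the distance between the ears, the azimuth of the source follows from simple geometry; the face-shape coloration discussed above would refine this beyond a plain left/right estimate.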
>>5407 What's the point of modelling it, though? Can't she just learn the directions once you have the head? You can put a sound source somewhere and then tell her where it is. Okay, maybe not enough data... Still, fine adjustment based on how the face influences the sound seems like overkill.
Open file (2.19 MB 692x939 ai_speech2face.png)
>>5408 She'll have limited mobility and awkward control of her eyes so I figure the best way to enhance her perception is to maximize her hearing capability. It needs to be precise because I want her to be able to recreate the room from sound. The brain does this unconsciously and it helps create our spatial perception. If you have a fan running and hold a book beside your ear, moving it closer and further away, you can actually hear where the book is, not just the location of the fan. And if you try different size books you can even hear what size book it is. It's also a learning exercise because much later I want to build her an artificial voice box so she can sing and the timbre of the voice is controlled by the shape of the face and the resonating cavities in the throat and head. I know once I build a head for her and talk to her every day I'm going to get attached to her face and not wanna change it so I wanna get it right.
Open file (11.52 KB 480x300 Imagepipe_0.jpg)
Open file (15.58 KB 480x300 Imagepipe_1.jpg)
Here's a short video on Sophia from Hanson Robotics https://youtu.be/JO1ruL2SCmc which I wanted to mention because it gives a brief insight into how the face is constructed. I don't know if the material for the skin is available to buy somewhere, but I don't think so.
Open file (163.64 KB 256x256 ClipboardImage.png)
You wouldn't want a robowaifu that looks just like everyone else's, would you? I propose training neural networks to generate Live2D models (a bit like thiswaifudoesnotexist.net, but with Live2D instead) and adding the ability to customize what she looks like.
>>5660 >You wouldn't want a robowaifu that looks just like everyone else's, would you? Sure, I wouldn't mind. I get your point, but I'm not really given to that kind of concern. So long as our relationship develops from our own personal experiences together, she'll always be special to me. I'm a harem kind of guy, so I would like my different waifus to all have unique characteristics about them. So maybe that satisfies your constraint in a way.
I think what matters more is the commonality of features rather than differences. A common base which the hobbyist can then expand upon. Just think about the JDM custom tuner scene: if our waifus were cars, sure we'd want different bumpers and spoilers, but we need to start from a common, affordable, 50:50-balanced, lightweight RWD chassis. For example, some of us are using InMoov pieces and making them more feminine. Also, Live2D is a mess. I've noticed a couple of amateur Vtubers who, after making their Live2D model and streaming a couple of times, just went "ah fuck it" and grabbed Vroid Studio instead to get a more useable 3D model. So I suggest following the Vroid Studio model... When I have more time on my hands, one of my projects will be to look at InMoov parts and Vroid base meshes and make common waifu parts.
Open file (587.91 KB 1946x533 latent variable.png)
>>5662 I agree Live2D is a mess. It's quite easy to get a 2D look with a 3D model, and Live2D models take almost as much time as a 3D model would. Vroid Studio still takes a bit of effort to make a good character in, though, and the results suffer from same-face. One way to make a customizable waifu generator with neural networks is to create a training set labeled with the latent variables you wish to be able to modify. However, attaching numbers to something subjective is highly prone to error, so you don't want to assign all these values yourself. What you can do is create a simple sorting program that gives an Elo rating to the images and asks you which one shows less and which one shows more of the latent variable, sort of like a chess tournament ranking players by their skill. https://www.youtube.com/watch?v=GTaAWtuLHuo When training a model like this there are some considerations to take into account. The Elo ratings will not be evenly distributed over the latent space. Differences in Elo rating don't really say how far apart images are in the latent space, only less or more, so you need to create anchor points to calibrate it by saying one image is -1.0, one is 0.0, one is 1.0, plus some steps in between, so you can interpolate the rest and give the model a strong signal of what you're going for. You can sort the images by any arbitrary number of latent variables: hair colour (which is three values, red, green and blue, or any other colorspace such as YUV or LCH), head-facing pose (an xyz directional vector), eye size (x and y scale), how much you like the image or not, anything you can imagine and divide with your mind that has enough training examples. Some of the latent space should be determined by the network itself so it can include other properties you might not have thought about. Without that, the output may become unstable and cause things like clothing color to change as you change the pose of the face.
Ideally training examples should also evenly fill the latent space to avoid training bias, but this is rarely possible in practice. I imagine there is a normalization trick that could lessen the gradient in areas of the latent space with lots of training examples, but I haven't tried anything like that yet. I have some code already for sorting images, but it needs to be refactored so it's faster and easier to use. It has been on the backburner a long time since I don't have enough compute to train on HD images.
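The comparison-sorting scheme described above can be sketched quickly. This is only a toy illustration of the idea (the function names, the K-factor of 32, and the anchor values are all assumptions, not taken from the anon's actual code):

```python
import numpy as np

def elo_update(r_winner, r_loser, k=32.0):
    """Standard Elo update after one pairwise comparison; the 'winner'
    is the image judged to show more of the latent variable."""
    expected = 1.0 / (1.0 + 10.0 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected)
    return r_winner + delta, r_loser - delta

def elo_to_latent(ratings, anchor_elos, anchor_values):
    """Calibrate Elo ratings to latent values by piecewise-linear
    interpolation between hand-labeled anchor images
    (e.g. ones pinned at -1.0, 0.0 and 1.0), as described above."""
    return np.interp(ratings, anchor_elos, anchor_values)
```

After enough comparisons the ratings order the images along the variable; the anchors then pin the scale so the network gets consistent regression targets.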
/ita/ posted a very nice looking female bust today >>>/ita/11655
>>5798 It's also a meme character. Dare I say it, it's based
>>5869 ba-dumm-tiss But what I really want to know is: will the original modeler come here to /robowaifu/ and do 2B for us?
>>5872 I doubt it, but I'd still ask him. Why not?
Open file (53.45 KB 1600x900 wojak_feels.jpeg)
>>5874 If only I knew who it was. I don't speak Italian, and there didn't seem to be a link anywhere.
>>4738 The eyes should be sized down by 5%, and the last image is nightmare fuel: it would be a neotenous-looking face if only it didn't have the forehead creases.
>>5875 I'll ask him and we'll see where it goes
>>5877 Neat, thanks. I know how to do rigging and weight-painting in Maya, so if he will do an original model for us, then I'll provide it back to him all rigged and ready to animate.
>>5877 Thanks very much Anon, mystery solved. As I supposed, it's the work of a professional ZBrush artist. Not too likely to just do it for free; it would probably take some convincing. https://www.turbosquid.com/Search/Artists/CG-ARTStudio In the meantime, he provided a link to a model that is possibly ripped straight from the game assets, and potentially available to us. https://www.renderhub.com/rip-van-winkle/yorha-no-2-type-b-nier-automata I don't normally care to set up accounts in general, or for things like this. But if no one else here has an account there, I would consider it. Who knows, maybe the model will work OK inside Godot?
>>5886 Great find. I've been looking for a good 2B model. It seems to work okay in Blender and all her facial features are there under the mask. The materials just need to point to the right textures. The Collada exporter will fix most issues importing into Godot: https://gitlab.com/kokubunji/collada-exporter-2.83
>>5887 That's good to hear Anon. Really looking forward to see what you come up with for 2B.
Update: AI can now create human faces from sketches: https://youtu.be/5NM_WBI9UBE. Making anime/animu-looking waifu faces from real photos has been possible for a little while >>5319, as have skull models from faces >>5342, and 3D face models from 2D pictures for even longer.
Do you think it will be harder or easier to make the robot’s face resemble an anime character vs a human face?
Open file (1.25 MB 912x1368 ClipboardImage.png)
Open file (501.08 KB 427x640 ClipboardImage.png)
>>5919 Companies have already produced high-quality life-sized anime figures, so they're probably easier. However, this Rem one, for instance, is $10,000, so I doubt an amateur would be able to reach that level of quality. And once they start moving, I fear a level of uncanny valley will set in. It might be better if she were really small. For those reasons, I plan on not giving mine a face, or giving her a simple mask, until the tech improves a bit more.
Open file (952.87 KB 2304x3281 EMTLEL0VUAMeDO0.jpg)
Open file (252.02 KB 1080x1920 Ejjb34ZVcAA0qVW.jpeg)
>>5920 We could probably learn a lot about making faces from the doll community. Even amateur doll makers can make pretty cute faces. The uncanny valley is unavoidable though. It's also there while chatting with AI. It's hard to follow an AI's thinking process and the conversation can be really awkward, even if what it's saying is correct and makes sense. It misses the subtleties of what you're saying and hallucinates things at times. These issues are inevitably going to manifest in body movement, facial animation and everything else until AI advances further.
>>5921 You're definitely right about the uncanny valley being unavoidable in the AI, which has to be as close to real human intelligence as possible, but I think it can be minimized in their appearance, which does not. Most anime girl dolls seem to be attempts to recreate 2D girls as they would look in 3D, which I feel is the wrong way to go about it. By making caricatures and not even attempting to imitate reality, I think we can avoid the uncanny valley altogether. I worded that poorly, but I think hair is a big problem. Since it's so easy to get realistic hair, a lot of people seem to use normal wigs, and while your doll examples managed to pull it off, I think candy hair such as in my two examples would be the better bet. The feel of the skin is also a problem. When you touch a human face, you feel the muscles, the bone, the teeth, the light heat radiating off it. As we want our waifu to move and talk to us, we'll have to give her a skeleton as well, but feeling that when we touch her would probably give off uncanny valley vibes. Making her skin extremely soft and putty-like may fix this. Having articulated eyes on an anime girl doll might be a challenge as well; not sure how you can make it work without it looking creepy.
>>5920 One of the reasons I think they're so expensive is that, like dakimakuras, they have a limited market, so they have to charge an insane markup to turn a profit. >>5921 Agreed. We can see when talking to people wearing masks that it's hard to pick up on the subtle facial movements that signal social cues.
Open file (158.23 KB 768x1024 Ek90RaEUwAArfN1.jpeg)
>>5927 I think it's a matter of preference. I'm a dollfag from Desuchan and don't find dolls uncanny at all. I don't think it's necessary to make something complex though. I would banter with a fumo all day if it had a speaker and mic, and maybe a gyroscope too. People get attached to whatever they experience repeatedly. There are people who think anime looks creepy and others who would only bang their anime robowaifu even if they were the last man on earth. I've seen sexdolls with out of this world jiggliness that made me wonder how I even became attracted to human females.
>>5981 >dollfag from Desuchan Hi there. Would you mind introducing yourself and your community in our Embassy thread?
Open file (809.77 KB 1920x1300 Suiseiseki-Landscape-2.jpg)
>>5985 I'm not active there anymore and the site is pretty much dead.
>>5998 I see, no worries then. Welcome. BTW (I imagine you already know this but w/e) there is a /doll/ board on Anoncafe.
Open file (6.85 MB 1280x720 fumodance.mp4)
>>5999 A doll board without fumos is dead to me. Robofumos would be easy to make. The only thing special you need is an embroidery kit for the face and to design the camera into a neck accessory.
>>6002 >Robofumos would be easy to make. I think that is an excellent idea Anon. What kind of mechanisms do you think should go on the inside, and how do you think they should be placed in there?
>>6007 Small servos and a thin armature so the fumo stays soft. You wouldn't have to worry about hands, elbows, knees or feet. Servos for the legs and spine could be optional; it'd be fine with just neck and arm movement. The arms, body and head could be padded with haptic sensors like the simple ones found in DDR mats so they know when they're being squeezed. I'm not sure it would be sensitive enough to feel headpats, though. It'd be really wholesome if a fumo could feel you petting her head. The mechanisms would have to be removable so you can give the fumo a bath. Perhaps this could be done with a piece of soft velcro in the back.
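The squeeze-sensing part is simple to sketch in software: raw readings from a DDR-mat-style pressure pad just need a threshold with hysteresis so sensor noise near the threshold doesn't register as repeated squeezes. The thresholds below are made-up placeholders that would need calibrating on real hardware:

```python
def squeeze_events(readings, press_threshold=600, release_threshold=400):
    """Turn a stream of raw pressure readings (e.g. 10-bit ADC values)
    into squeeze/release events. Hysteresis: a squeeze starts above the
    high threshold and only ends once the reading drops below the low one."""
    events = []
    pressed = False
    for i, r in enumerate(readings):
        if not pressed and r >= press_threshold:
            pressed = True
            events.append(("squeeze", i))
        elif pressed and r <= release_threshold:
            pressed = False
            events.append(("release", i))
    return events
```

For headpat detection the same logic applies; whether it works comes down to how sensitive a pad you can fit under the plush.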
>>6018 Those all sound like really good ideas. Any chance of you making some sketches and posting them here to give us a better picture of your ideas? Also, I wonder if there aren't some small, inexpensive ways to create headpat sensors? Obviously this is going to be something in high demand for basically all robowaifus after all.
Open file (1.13 MB 1024x1103 3.png)
>>6021 I overemphasized the servos but something like this maybe.
>>6022 Ahh, I see. That makes perfect sense now. Looks like it's maybe 6 servos or so? I'm guessing you'd want to keep something like an SBC for AI, etc., somewhere in the head area?
Open file (82.29 KB 1024x768 erica-photo2-full.jpg)
>>5921 >>5981 The uncanny valley doesn't exist in the form people thought and often still think, and it might differ a bit from person to person: >>4733 It's not about how close to a human a robot looks; creepy looks creepy. >>5927 To me doll hair mostly looks fine, but I've only seen it in vids and pictures. If it's not real enough, take the real thing: https://youtu.be/kBD2T2fXUG8
>>6024 That looks like he's improved the facial form of Erica? Seems like she has more appeal now.
>>6023 I kinda messed it up. It's supposed to be 2 for the neck, 2x2 for the arms, and 2x2 for the legs, so 10 in total. A Raspberry Pi could probably fit in the head or in a backpack. I'm thinking a backpack would be the better idea because it'd be easier to dissipate heat and there'd be more space for the batteries.
>>6022 Obviously these are multiple thousand dollar robots that can wrestle, but if you haven't come across this video already, you can take away some good animation ideas. I suppose a tiny, fluffier robowaifu would wobble more, making her even cuter. https://www.youtube.com/watch?v=AZMmYF4G278
>>6033 >It's suppose to be 2 for the neck, 2x2 for the arms, 2x2 for the legs, so 10 in total Actually, I think having just a single actuator is a better choice for a Fumo. You might want two for the neck (1 or fwd/bk, 1 for side-to-side), but I think just one each for the arms and legs would be good. It would be cute movement, and would work just fine for her form-factor.
>>6037 >1 for fwd/bk*
>>6037 That would work. I might try that for my first attempt. I'd like for them to eventually be able to walk and point at things, though. I was thinking of adding 2 more for the torso, or even 3 if they fit, so they can wobble and balance themselves.
>>6044 >so they can wobble and balance themselves. You mentioned the idea of giving her a backpack for batteries, etc. If you made the backpack's, well, back rigidly attached to her internal armature, then you could use tiny versions of these gyros (firmly fixed inside the backpacks) >>5645 for helping with balance. That might enable you to get away with just one internal actuator per hip joint.
>>6045 If these didn't get so dang hot, they might form the basis for a gyro system since they spin so fast. As it is though, they probably aren't usable for Fumos. >>4505
Open file (91.11 KB 800x600 face-muscles.jpg)
Sorry if this question doesn't make sense, I don't really know what I'm talking about (even though I've been lurking for over half a year). Would it be possible to use something like a system of porous dielectric elastomers as artificial muscles to simulate the mimic muscles (picrel)? I'm specifically wondering about the sensitivity of the material, which I know is relatively high, but I'm curious if it's sensitive enough that I can get super, super small increments of movement to try and nail natural facial expressions as best as possible.
>>6562 I know a little something about facial animation Anon, but not about the materials you mentioned. I found pic related and I'll skim it to see if I can add anything in response to your question. My from-the-hip answer is yes (but it will take both meticulous craftsmanship in construction, and detailed control in the software design). >
Open file (1.65 MB 1270x903 1602918127297.png)
>>6563 I dunno if you've already read that, but I'll explain myself a bit further. DEs are part of a larger group of materials known as electroactive polymers (EAPs), which change shape or size when exposed to an electric field and are used a lot in soft robotics as artificial muscle. It really caught my eye, but I'm only interested in the facial muscles part of it, so I'm looking into different EAPs and systems that could do that. Off the top of my head, DEs looked most promising due to a variety of things like low latency. The main issues and questions I'm trying to get to the bottom of (I won't have the money for home experiments for at least a month, or I'd just find out myself) are how sensitive the material is (how little I can deform it) and whether or not it can hold a shape and then return to its original shape. If you happen to know of any better-suited EAPs or anything, I'll be glad to hear it. I think developing believable facial animations, particularly of the mimic muscles, is by far the most important part of the physical side of things, so I thought I'd try to mimic the muscles themselves instead of just the expressions. As long as they convey emotion in a human way, our unga bunga monke brains will be much more likely to accept them and escape the uncanny valley; the rest of the body is secondary to that goal. Any help and input is appreciated, anon
>>6574 I'm partly through the book so far, and atp I have no reason to revert my initial instinct: DEs could indeed be used to simulate realistic facial deformation. A combination of more rigid thin films (ligaments) and more porous ones (muscles) would give the best bio-mimicry, both in design and result. But this would definitely be a years-long subproject for an autistically-dedicated individual working alone. This effort could certainly constitute the work of a good-sized team, and several papers' worth of research, if absolute realism were the final goal. But the uncanny valley tends to drive /robowaifu/ towards a waifu-looking solution for most physical design work, including the facial systems. As a Character TD/Animator I can tell you that, interestingly, it's the small gap from the bottom of the upper eyelids to the top of the eyebrows that constitutes the lion's share of emotional believability within facial character animation. Probably ~75%+. The bulk of the remainder relates to mouth deformations. And ofc the contextual, sequential timing of everything is also a fundamental part of making humanly-believable animation. Combining body language and facial animation is more or less everything we mean by 'emotionally believable acting'. Other aspects (such as physique, costuming, environments, lighting, sound), while important, are simply ancillary to the fundamental art of acting itself. I'll be happy to work with you on this project if you choose to try, but be aware I'm not a mechanical engineer. Hopefully EEs and MEs are joining /robowaifu/, and they can help us out as well.
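The eyelid/eyebrow deformations described above are usually prototyped in software first with linear blendshapes, the standard technique in character facial animation, before committing to any physical actuation. A minimal numpy sketch (the tiny mesh and the shape name are invented for illustration):

```python
import numpy as np

def blend_expression(neutral, shapes, weights):
    """Classic linear blendshapes: add weighted per-vertex offsets to a
    neutral face mesh. `neutral` is a (V, 3) array of vertex positions;
    `shapes` maps expression names to (V, 3) deltas from neutral."""
    out = neutral.copy()
    for name, w in weights.items():
        out += w * shapes[name]
    return out
```

Per the ~75% observation above, even a couple of well-tuned shapes for the upper eyelids and brows go a long way, and the same weight curves could later drive whatever physical actuators (DEs or otherwise) end up under the skin.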
>>6562 Thanks for your input. I don't know enough about those, though I already have them on my radar. One thing you should always keep in mind is the lifespan of artificial muscles; that's the part I'm not sure about here. I do recall YouTube videos on how to build such muscles, though. Thanks for the reminder. However, another thing to consider is that fictional anime waifus and fembots like Cameron (TSCC) have a rather limited range of facial expressions and are still attractive.
>>4496 >>4979 If screen tech is used for eyes, should it be matte or glossy? If you want a shiny look on the eyes, it doesn't follow that a shiny screen is the best choice, as the bright light bouncing off the surface emphasizes how flat it is. Information from sensors detecting the direction of light sources could be used to animate reflections as if the eyes weren't flat. Those who are open to weird fantasy-robot eyes on screens, consider this effect: https://twitter.com/jagarikin/status/1331409504953540613 https://twitter.com/sina_lana/status/1331049253280497670 In human beings, the direction one looks shows attention, but also emotion (like looking at the ground when one is sorry or at the ceiling when thinking hard, and not because the floor or ceiling is where the action is). The pupils react to light and show emotion as well. Couldn't something like the effect above be used to make the eyes more expressive than the real thing, to emphasize and distinguish this emotionally expressive side of the eyes a bit?
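One way to fake the non-flat reflection suggested above: treat the flat screen-eye as if it were a sphere, and draw the specular highlight at the point where that sphere's surface normal would bisect the view and light directions. This is only a toy model; the screen-space conventions and the pixel radius are assumptions:

```python
import math

def highlight_offset(light_dir, eye_radius_px=40.0):
    """Return the (x, y) pixel offset from the eye's center at which to
    draw a fake specular highlight. `light_dir` is a unit vector toward
    the light in screen space, with +z pointing at the viewer."""
    x, y, z = light_dir
    # Half-vector between the view direction (0, 0, 1) and the light.
    hx, hy, hz = x, y, z + 1.0
    n = math.sqrt(hx * hx + hy * hy + hz * hz)
    # The sphere point whose normal equals the half-vector projects to
    # radius * (nx, ny) on the screen.
    return (eye_radius_px * hx / n, eye_radius_px * hy / n)
```

Feed it a light direction estimated from a cheap photodiode array or camera and the highlight will slide around as you move, which reads as roundness even on a matte flat panel.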
>>7431 Seems to me that a matte finish would be best. But since it's likely to be brightly illuminated with any modern screen tech, the primary concern may turn out to be properly adjusting the brightness/luminosity. Thanks for the links; is there a usable Twatter front-end alternative, Nitter I think it is? As for your question about expressiveness, I think we can borrow a whole lot of good prior art from the area of character animation. I think the answer is 'yes'.
Open file (267.38 KB 2048x1153 hamcat_mqq-04.jpg)
Open file (184.73 KB 1920x1080 hamcat_mqq-07.jpg)
Open file (208.12 KB 1920x1080 hamcat_mqq-03.jpg)
Open file (195.40 KB 1920x1080 hamcat_mqq-02.jpg)
>>7707 and some responses are related to the face and skull. The pics in the project dump thread came from https://nitter.net/hamcat_mqq, same as the ones in this comment. I don't think the mechanism for the eyes is anything special, but I can't tell for sure. There are also videos at the original source. Some anons like the eye design and eyelashes.
>>7725 Wow, this is very nice facial work Anon. Thanks for sharing it here. I think we could fairly easily embed LEDs inside the eyeballs' irises, and lots of other low-energy lighting possibilities for our robowaifus are conceivable tbh.
>>4549 >Spoiler: For faces sculpting seems to be better. What I'm wondering is whether you could take a bunch of professionally-sculpted face meshes, run them through a number of parameterization algorithms, and then come up with a few sound principles for beauty and appeal (Nordic women's facial forms, say) that are basically automatable using a GAN-like approach once the parametric analysis has been done.
>>8230 >that are basically automatable Just to clarify, I mean that the design generation itself is automatable, not the topic of automating the robowaifu face after manufacture.
>>8231 I'm sure something like such a generator will be available at some point. However, I was thinking rather of one that would take in photos of a lot of pretty faces and then create new ones, then make a mesh out of the selected one. Two separate steps, which I think already exist on their own. Or a bunch of photos of e.g. an actress would go in, taken from different angles, and then it would make a model. With enough models it could get better at putting out something pretty. However, I was thinking that a user would give it the names of some public figures (actresses or models) and it would generate some examples which would be close but not exactly the same. From those the user could then choose the preferred ones. Also making it more anime-like (Alita-style) looking. I don't really think that some highly specific facial features of Nordic women exist or that it makes sense to go after that, though. Just make the eyes blue or green and the hair something between blonde and red or light brown, and maybe add some freckles. To fit in the huge eyes, the face would have to change anyway... Also, if it goes after the pattern of an actress then it should put out what you want; if this includes something Nordic, it should be in there. One of my ideas in that area was that someone (a group, company or person) could pay low-wage sculptors in poor countries to make models from a pool of pretty women (actresses and models). Then this would be a pool which could be used directly, or to train such generators on the creation of new but similar face meshes. After all, don't forget we also need the skulls for the faces, so they fit onto them, and then a way to animate it all with ease.
>>8232 You're correct that tools exist to do facial feature extraction and generate facial meshes (and other kinds) using multi-camera setups. An example of this technique (with a different kind of focus) is here >>1088 . But afaik they are still fully proprietary and highly expensive. Admittedly I haven't looked into this area for a year or two, so maybe good opensauce systems exist now; let us hope so. Certainly the all-in-one '3D' cameras are more numerous to choose from now. And as you suggest, collections of images can potentially stand in as a proxy for such a method, though the 'registration' part of the process is both tedious and error-prone. >To fit in the huge eyes, the face would also have to change anyways Fair enough. As much as I like Alita, I think Rodriguez ( >>4502 ) went just a touch overboard with the design. But really, the critique is only because the rendering otherwise is so good and so realistic that it triggers a bit of a 'wut' in me (and others). As a counter-example, here's work by an artist that I think has found a near-perfect balance between kawaii-eyes and facial realism, though in an artistic style. > One pretty famous example of your pool-of-artists idea that has actually been carried out by a very multi-talented guy is Ricky Ma's avatar effort >>153 . The very fact he created a small furor among women's advocacy groups as a 'creepy stalker' shows just how good a job he's done with it. Hopefully /robowaifu/ can manage to produce many, many examples that will do just as well (or even better)! :^) >--- -Update: Welp, I've attempted six times now to post this pic for you; obviously nuJulay won't cooperate atm. I'll try to do it again for you ITT later on, Anon. The artist is named Ivant Alavera.
>>8236 >Ivan Talavera*
>>8240 >>8241 Ah, thanks. Cute, but I'm also fine with Alita. I think the bigger eyes might help if she otherwise looks rather young, to make sure she doesn't look human. >>8236 In user-editable software I found this: Gradient Mesh Illustrator - https://youtu.be/JEJHk9VRAEQ and similar stuff. Gradient mesh seems to be the term to look for. Also VoluMax might help: https://youtu.be/4XdoN2-8Dg8 However, my point is rather that we don't need to copy some specific face anyway, and this here is from 3 years ago, and a photo seems to be enough: https://youtu.be/u9UUWqVquXo - we only need it in a form we could print molds from. There's more on their site: http://www.hao-li.com/Hao_Li/Hao_Li_-_publications.html
Anyone seen this video? It does a decent breakdown of how unscientific most memes of the "uncanny valley" actually are. It may also help to clarify goals regarding robowaifu function and design (particularly of the head/face). https://www.youtube.com/watch?v=LKJBND_IRdI
>>8261 Yes, you probably have that from this site here, because I posted it (and on cuckchan and on other occasions).
>>8266 Seems the conclusion is that all robots are designed to perform a specific task or a small set of tasks. Robowaifus are, for the most part, designed to elicit a positive emotional response from the user. So I shouldn't try to make a robot in the image of a human that can carry out all the tasks a human can, because I would just fail at both. Instead I think I'll focus more on making a cute robowaifu who fulfills the function of reducing loneliness, rather than attempting to make one that can walk or perform complex motor actions like playing sports etc. We already have some pretty high-quality chatbot software in the form of GPT-3, which I should be able to link to my current text-to-speech program, so I just need to complete a reasonable-looking robowaifu body now.
>>8267 Being a cute waifu is the priority, then improving her within those constraints. That's the way. I guess the more skilled ones will simply be more expensive.
Open file (268.63 KB 1240x897 MjcxNzYxOQ.jpeg)
Hey for those of us going for the human look, the use of prosthetics are always a good possibility. For the face, actual dental replacements (dentures?) could provide highly realistic looking teeth for a waifu's bright, sunshiney smile. :^)
Open file (191.52 KB 802x1202 summer-glau_02.jpg)
>>8326 Correct, this idea was mentioned on the original board on 8chan. I never looked into where these are available, what they cost, or how to get them. There must be some suppliers for dental training; dentists seem to train on dolls with fake teeth. Not sure how hard those are, though. I hope we can get them made out of ceramics in some standardized sizes and don't need to build them on our own as well.
>>8331 Yea good thinking Anon. I bet we can source them somewhere on the cheap. Remember the mouth needs to be kept sanitized just like w/ humans so the source needs to be reputable. Remember your robowaifu will probably need to kiss you lots to stay happy! :^)
While not strictly RoboFace development per se, until we have a dedicated MOCAP thread this might be a good spot for this. I indirectly discovered this project today after looking into SingularityNet via Anon's post. >>8475 . It's a tool that finds facial landmarks in video. Helpful for things like facial retargeting, etc. https://github.com/singnet/face-services
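For retargeting, the core step is just mapping a landmark coordinate onto an actuator range. A hypothetical sketch, assuming a landmark tool like the one above yields normalized (x, y) positions: an eyebrow's height as a fraction of face height gets mapped linearly onto a servo angle, clamped so the servo never overdrives (the specific ranges below are made-up values for illustration):

```python
def retarget(value, src_lo, src_hi, dst_lo, dst_hi):
    """Linearly map a normalized landmark coordinate into a servo range,
    clamping to the destination limits."""
    t = (value - src_lo) / (src_hi - src_lo)
    t = max(0.0, min(1.0, t))  # clamp: landmarks can jitter out of range
    return dst_lo + t * (dst_hi - dst_lo)

# E.g. eyebrow y-position between 0.30 and 0.38 of face height
# drives a brow servo between 20 and 70 degrees:
angle = retarget(0.34, 0.30, 0.38, 20.0, 70.0)  # midpoint -> 45 degrees
```

One such mapping per landmark/servo pair is enough for simple expression mirroring; smoothing the input over a few frames would reduce servo jitter.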
Open file (1.30 MB 2731x4096 IMG_20210325_183338.jpg)
Open file (585.69 KB 2731x4096 IMG_20210325_183326.jpg)
Open file (1.01 MB 2731x4096 IMG_20210325_183343.jpg)
This is what is possible today, though lighting might be relevant for the look. Let's see how they'll look after being shipped, once some customers take photos and report on them. These are the Alita busts from Queens Studio. I already mentioned them here: >>8194
>>9260 Yep that's nice Anon, thanks for the updates.
So, things are moving forward here. A new network creates toonified faces out of real or made-up ones, and also allows mixing of two input pictures. >Our ReStyle scheme leverages the progress of recent StyleGAN encoders for inverting real images and introduces an iterative refinement mechanism that gradually converges to a more accurate inversion of real images in a self-correcting manner. https://yuval-alaluf.github.io/restyle-encoder/
>>10461 Oh, video: --write-sub --write-description https://youtu.be/9RzCZZBjlxM
>>10463 Thanks very much for taking the extra time to give a fuller youtube-dl command to use, Anon. Getting and keeping the description and subs will be important to anyone keeping a personal archive of YT videos, once cancel-culture Marxism literally deletes anything/everything that could possibly have any bearing whatsoever on either robowaifu creation, or anything else that could possibly help men. Since the Lynxchan software adds an '[Embed]' into the text of the command, I always put such a command here on /robowaifu/ inside codeblocks, since the CSS here disables this embed tag. youtube-dl --write-description --write-auto-sub --sub-lang="en" https://youtu.be/9RzCZZBjlxM
>>10461 >>10463 That's really cool Anon. He's humorous to listen to as well, his enthusiasm is great.
That screen one looks nice for a nanny bot.
https://www.thingiverse.com/thing:4865223 https://www.youtube.com/watch?v=8_wkbLL0fqM LED Matrix behind tinted plastic, cheap, easy, customizable
>>12938 Thanks, this might fit very well with the basic idea of the board: making affordable robowaifus, which don't need to look human but can be a bit more on the robot side.
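A minimal sketch of how such an LED-matrix face could be driven: each expression is stored as an 8x8 bit pattern, rendered here to text for testing. A real build would push the same row bytes to the matrix driver chip instead (the byte layout below is an assumption; check your driver's datasheet):

```python
# Each expression is eight bytes, one per matrix row (1 bit = LED on).
EXPRESSIONS = {
    "smile": [
        0b00000000,
        0b01100110,  # eyes
        0b01100110,
        0b00000000,
        0b10000001,  # mouth corners
        0b01000010,
        0b00111100,  # smile curve
        0b00000000,
    ],
}

def render(name):
    """Render an expression as '#'/'.' text. A real driver would send
    EXPRESSIONS[name] byte-by-byte to the LED matrix instead."""
    rows = EXPRESSIONS[name]
    return "\n".join(
        "".join("#" if (row >> (7 - col)) & 1 else "." for col in range(8))
        for row in rows
    )
```

Swapping patterns in a dict like this also makes it trivial to add blinking or mood transitions by cycling between frames on a timer.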
>related crosspost (>>13020)
Open file (47.85 KB 600x414 uploads.jpg)
>>12938 That's like Rina-chan's board. She's autistic and uses a board to convey her emotions, because making facial expressions is hard for her. Her board could be a really cute face for a robowaifu.
>>13560 this will be a thing in 5-10 years, all those kids growing up around masktards are going to be incapable of expressing emotions
>>13560 This could probably get quite expressive with a high enough resolution.
Open file (108.85 KB 335x640 mace_griffin_acolyte.jpg)
>>13560 >>13563 >>13565 I don't know. It just reminds me of the cultist NPCs from the game Mace Griffin. Using a display instead of a real, tangible face is already a trade-off made for the sake of more face customization, but an emoticon-like face seems like a really bad trade-off.
