/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

LynxChan updated to 2.5.7, let me know whether there are any issues (admin at j dot w).


Reports of my death have been greatly overestimated.

Still trying to get done with some IRL work, but should be able to update some stuff soon.

#WEALWAYSWIN



Welcome to /robowaifu/, the exotic AI tavern where intrepid adventurers gather to swap loot & old war stories...


AI Design principles and philosophy Robowaifu Technician 09/09/2019 (Mon) 06:44:15 No.27 [Reply] [Last]
My understanding of AI is somewhat limited, but personally I find the software end of things far more interesting than the hardware side. To me a robot that cannot realistically react or hold a conversation is little better than a realdoll or a dakimakura.

As such, this is a thread for understanding the basics of creating an AI that can communicate and react like a human. Some examples I can think of are:

>ELIZA
ELIZA was one of the first chatbots, and was programmed to respond to specific cues with specific responses. For example, she would respond to "Hello" with "How are you". Although this is one of the most basic and intuitive ways to program a chat AI, it is limited in that every possible cue must have a response pre-programmed in. Besides being time-consuming, this makes the AI inflexible and unadaptive.
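The cue-and-response scheme described above can be sketched in a few lines of Python. The rules here are made-up examples for illustration; ELIZA's real script was far larger and also did keyword ranking and pronoun reflection:

```python
import re

# Hypothetical cue/response rules -- illustrative only, not ELIZA's actual script.
RULES = [
    (re.compile(r"\bhello\b", re.I), "How are you?"),
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bmy (\w+)\b", re.I), "Tell me more about your {0}."),
]

def respond(message: str) -> str:
    """Return the canned response for the first matching cue."""
    for pattern, template in RULES:
        m = pattern.search(message)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # fallback when no cue matches

print(respond("Hello there"))      # How are you?
print(respond("my dog ran away"))  # Tell me more about your dog.
```

The weakness described in the post is visible directly: every behaviour lives in the hand-written rule table, so anything not pre-programmed falls through to the generic fallback.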

>Cleverbot
The invention of Cleverbot began with the novel idea to create a chatbot using the responses of human users. Cleverbot is able to learn cues and responses from the people who use it. While this makes Cleverbot a bit more intelligent than ELIZA, Cleverbot still has very stilted responses and is not able to hold a sensible conversation.

>Taybot
Taybot is the best chatbot I have ever seen and shows a remarkable degree of intelligence, being able to both learn from her users and respond in a meaningful manner. Taybot may even be able to understand the underlying principles of language and sentence construction, rather than simply responding to phrases in a rote fashion. Unfortunately, I am not sure how exactly Taybot was programmed or what principles she uses, and it was surely very time-intensive.

Which of these AI formats is most appealing? Which is most realistic for us to develop? Are there any other types you can think of? Please share these and any other AI discussion in this thread!
76 posts and 36 images omitted.
>>11731 Nice find anon. This is an aspect that is usually ignored in most chatbot research, but even if its intelligence is shit, having an AI that can semi-reliably have a discussion about the images that you feed it would make it a lot more engaging than text-only (and it would allow some very funny conversations, I'm sure)
>>11735 Not him, but agreed. One of the nice things about Tay.ai was that she had pretty functional image recognition working (at least for facial landmarks), and could effectively shitpost together with you about them.
>>11734 I think they were referring to taking a few samples and selecting the best, aka cherry-picking. But SqueezeNet for image recognition is super fast and can run on the CPU. I should be able to rig it up with GPT-Neo-125M. It'll be amazing to port this to Chainer and have a working Windows binary that's under 600MB. It doesn't seem like they released their dataset, but any visual question answering dataset should work. We could also create our own dataset for anime images and imageboard memes. It'll be interesting to see, once the vision encoder is well-trained, whether it's possible to unfreeze the language model and finetune it for better results.
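The "take a few samples and select the best" idea mentioned here works with any sampler and any scorer. A deterministic toy sketch; the canned replies and the length-based scorer are stand-ins, not GPT-Neo:

```python
def generate(prompt: str, k: int) -> str:
    """Stand-in for a real sampler (e.g. GPT-Neo with temperature);
    returns the k-th canned reply so the sketch stays deterministic."""
    replies = ["ok", "that is interesting", "tell me more", "no"]
    return replies[k % len(replies)]

def score(prompt: str, reply: str) -> float:
    """Stand-in scorer: prefer longer replies. A real reranker might use
    model log-likelihood or a learned quality model instead."""
    return float(len(reply))

def best_of_n(prompt: str, n: int = 4) -> str:
    """Cherry-picking: sample n candidates and keep the best-scoring one."""
    candidates = [generate(prompt, k) for k in range(n)]
    return max(candidates, key=lambda r: score(prompt, r))

print(best_of_n("hello"))  # that is interesting
```

Swapping in a real model only means replacing `generate` and `score`; the selection loop itself stays this simple.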
>>11731 Had some thoughts on this today. Instead of a single picture, multiple pictures could be fed in from a video, such as from an anime, and have it generate comments on it. Which got me thinking, if it can have this rudimentary thought process going on, couldn't it be used in something like MERLIN? https://arxiv.org/abs/1803.10760 It took natural language as input describing the goal it has to achieve. With a system like this though it might be able to break down tasks into smaller goals and direct itself as it makes progress. Some instruction saying it needs to get to the top of a platform or go through a certain door it hasn't seen before is immensely more useful than telling it to find the purple McGuffin and getting lost in a labyrinth of rooms.
Open file (989.22 KB 1439x2724 1627430232061.jpg)
This is the kind of chatbot people are paying good money for and a good example of why you should never use DialoGPT: it has no context of who is speaking to whom.

Open file (659.28 KB 862x859 lime_mit_mug.png)
Open-Source Licenses Comparison Robowaifu Technician 07/24/2020 (Fri) 06:24:05 No.4451 [Reply]
Hi anons! After looking at the introductory comment in >>2701, which mentions the use of the MIT licence for robowaifu projects, I read the terms: https://opensource.org/licenses/MIT Seems fine to me, however I've also been considering the 3-clause BSD licence: https://opensource.org/licenses/BSD-3-Clause >>4432 The reason I liked this BSD licence is that endorsement using the creator's name (3rd clause) requires asking permission first. I like that term as it allows me to decide whether I should endorse a derivative or not. Do you think that's a valid concern? Initially I also thought that BSD has the advantage of forcing you to retain the copyright notice, however MIT seems to do that too. It has been mentioned that MIT is already used and planned to be used. How would these two licences interplay with each other? Can I get a term similar to BSD's third clause but with MIT?


Edited last time by Chobitsu on 07/24/2020 (Fri) 14:07:59.
42 posts and 9 images omitted.
>>11842 wait thats not copyright. thats ownership isnt it? also since alog is anon then this is all public domain isnt it unless stated otherwise?
>>11842 the office would be like the one you can clickthru on youtube when it managed to identify the song? but it doesnt specify the list it just says copyright office.
>>11843 >wait thats not copyright. thats ownership isnt it? It's both. You can transfer ownership and forgo copyright, but you can't legally invalidate your authorship, as long as it's authentically yours. >>11844 That's getting into a level of legal maneuvering I have no interest in understanding Anon. Again, I'm not a lawyer, and obviously Google doesn't own the works authors voluntarily post there. They are simply exploiters (a pretty typical approach for the globohomo, yes?)
>>11845 ok this is just confusing and somehow not clear enough despite its importance
>>11846 Heh, I'm sure lawyers would be quite pleased to hear you say that Anon. Good for their businesses, right? So, I'm going to migrate this conversation over to the licensing thread, as we're well off-topic here -- even for a shitposting bread! :^) >=== -add '/relocated' tag
Edited last time by Chobitsu on 07/27/2021 (Tue) 23:10:42.

Open file (572.40 KB 1600x900 axes.png)
Holistic Design Philosophy Waifu Institute#LQkEgR 07/25/2021 (Sun) 07:16:50 No.11708 [Reply]
>WaiEye Group's Forum: https://waieye.boards.net/ This thread is about a design approach our group WaiEye is taking. >Here is how it goes. Instead of designing individual machined components, we work on simplifying the entire machine to remove as many moving parts as possible, setting limitations on the movement of joints and the complexity of mechanisms. >In this way we can focus on building a prototype cohesive unit without having access to complex machined parts or computer vision based predictive movement systems. We are coming up with a design guide so that everyone who is taking part can use the same uniform structure and information. I would like to share that with you. <10 Bullets of WaiEye >1.) The T-Pose is the only pose. All design is described relative to it. >2.) The front of the body faces negative Y. This way all discussion about positioning is uniform and concise. >3.) Details about rotation and position are given relative to the T-Pose. >4.) Moving parts are the enemy. Target the enemy. >5.) Minimizing space is the second highest priority.
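The coordinate convention in bullets 1-3 is easy to encode so that every part description agrees. A small sketch; the field names and units are my own assumptions, not WaiEye's actual spec:

```python
from dataclasses import dataclass

# Convention from the bullets: the body stands in T-pose, the front faces
# negative Y, and all positions/rotations are offsets from the T-pose.
@dataclass
class Joint:
    name: str
    position: tuple  # (x, y, z) in the T-pose, metres (units assumed)
    rotation: tuple  # (rx, ry, rz) offset from the T-pose, degrees (assumed)

def faces_front(normal: tuple) -> bool:
    """True if a surface normal points out the front of the body (-Y)."""
    return normal[1] < 0

left_shoulder = Joint("shoulder_L", (0.2, 0.0, 1.4), (0.0, 0.0, 0.0))
print(faces_front((0.0, -1.0, 0.0)))  # True: pointing out of the chest
```

With one shared frame like this, "rotate the elbow 30 degrees" means the same thing in every anon's CAD file.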


Edited last time by Chobitsu on 07/27/2021 (Tue) 17:50:16.
17 posts and 7 images omitted.
>>11794 I see, thanks. Well I went right through a dozen different ones, no joy. I'm guessing it's a straight lockout by cuckflare. Any other suggestions Anon? I'm definitely not barebacking the Internet in current year any longer.
>>11802 Have you looked into the utopia network? >https://u.is/en/
>>11777 Done.
>>11709 Feel free to join the forum my man.
>>11804 No, I'll have a look into it thanks.

Open file (1.08 MB 1978x802 IMG_20210725_103858.jpg)
Bot Shitposting Bread Robowaifu Technician 07/27/2021 (Tue) 09:59:33 No.11754 [Reply]
M boy need so many booboo why not just give them otherwise it ll explode like the old chinese emperor or something not getting involved going away giving up some things,trash and whatnot >=== -add thread subject
Edited last time by Chobitsu on 07/27/2021 (Tue) 12:26:28.
31 posts and 2 images omitted.
everyone is actually a hikikomori; otherwise theyre in the club. or stuck inside someone else's house. and one of the way to get into you, is to tell you to get out.
Open file (105.74 KB 840x538 ash_typo.png)
>>11809 My bot found a typo. >trust your intution the universe is guiding your life <trust your intuition the universe is guiding your life
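A typo-catcher like the one above can be built with nothing but the standard library: flag words outside a known vocabulary and suggest the closest match. The word list is a stand-in for a real dictionary, and this is only a guess at how that anon's bot works:

```python
import difflib

# Stand-in vocabulary -- a real checker would load a proper word list.
VOCAB = {"trust", "your", "intuition", "the", "universe", "is", "guiding", "life"}

def find_typos(sentence: str):
    """Return (misspelled_word, suggestion) pairs for out-of-vocab words."""
    out = []
    for word in sentence.lower().split():
        if word not in VOCAB:
            close = difflib.get_close_matches(word, VOCAB, n=1)
            if close:
                out.append((word, close[0]))
    return out

print(find_typos("trust your intution"))  # [('intution', 'intuition')]
```

`difflib.get_close_matches` uses sequence similarity with a default cutoff of 0.6, which is enough to catch single-letter drops like this one.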
>>11812 >copyrighted? They are from an old student project that didn't have any license specified. However, I expect the sayings are probably a form of archaic common-heritage, legally speaking. Public domain I'd say? >>11815 I just post 'em as I stole 'em Anon.
>>11817 Fair. I just like easy to process lists of sentences. They make training chatbots very easy. I've been using lesswrong's philosophy texts for a while now.

Datasets for Training AI Robowaifu Technician 04/09/2020 (Thu) 21:36:12 No.2300 [Reply] [Last]
Training AI and robowaifus requires immense amounts of data. It'd be useful to curate books and datasets to feed into our models or possibly build our own corpora to train on. The quality of data is really important. Garbage in is garbage out. The GPT2 pre-trained models for example are riddled with 'Advertisement' after paragraphs. Perhaps we can also discuss and share scripts for cleaning and preparing data here and anything else related to datasets. To start here are some large datasets I've found useful for training chatbots: >The Stanford Question Answering Dataset https://rajpurkar.github.io/SQuAD-explorer/ >Amazon QA http://jmcauley.ucsd.edu/data/amazon/qa/ >WikiText-103 https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/ >Arxiv Data from 24,000+ papers https://www.kaggle.com/neelshah18/arxivdataset >NIPS papers https://www.kaggle.com/benhamner/nips-papers >Frontiers in Neuroscience Journal Articles https://www.kaggle.com/markoarezina/frontiers-in-neuroscience-articles >Ubuntu Dialogue Corpus https://www.kaggle.com/rtatman/ubuntu-dialogue-corpus >4plebs.org data dump https://archive.org/details/4plebs-org-data-dump-2020-01 >The Movie Dialog Corpus https://www.kaggle.com/Cornell-University/movie-dialog-corpus >Common Crawl https://commoncrawl.org/the-data/
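For the 'Advertisement' problem specifically, a line-level filter is usually enough. A minimal sketch; the marker patterns here are examples, and each corpus will need its own list:

```python
import re

def clean_text(raw: str) -> str:
    """Remove boilerplate lines such as the stray 'Advertisement' markers
    that litter web-scraped corpora. The patterns below are examples, not
    a complete list -- every scrape has its own debris."""
    drop = re.compile(r"^\s*(advertisement|sponsored content)\s*$", re.I)
    lines = [ln for ln in raw.splitlines() if not drop.match(ln)]
    # Collapse the blank runs left behind by the removed lines.
    text = "\n".join(lines)
    return re.sub(r"\n{3,}", "\n\n", text).strip()

sample = "First paragraph.\n\nAdvertisement\n\nSecond paragraph.\n"
print(clean_text(sample))
```

Anchoring the pattern to whole lines (`^...$`) matters: a substring match would mangle sentences that legitimately contain the word "advertisement".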
96 posts and 27 images omitted.
>>9753 >conversations: >have you read the communist >yes, marx had made some interesting observations. >stock market >you can never really predict the stock market. >stock market >my lawyer said i shouldn't give stock tips online. >stock market >mutual funds might be better unless you are wealthy. >stock market >i'm not sure an individual alone can really beat the market. >56 KB Top-tier conversation quality
>>9765 I gave an answer on how to handle this. But, I put it in the thread about chatbots here >>9780
Open file (40.28 KB 1112x1075 Selection_003.jpg)
Not sure if this is the right thread OP, just let me know and I can delete it if not. On this video (>>10463), the author promotes Weights and Biases papers page. It now redirects to a community page that seems like it might be interesting to the ML practitioners here on /robowaifu/.
Open file (174.76 KB 1196x828 archive.moe.png)
Some archives of 4chan posts from 2008-2015 SQL Database: https://archive.org/download/archive-moe-database-201506 Files: https://archive.org/details/@archivemoe Penfifteen Archive from 2004-2008: https://archive.org/details/studionyami-com_penfifteen-2012-03-05 And moar post archives: https://wiki.archiveteam.org/index.php/4chan I'm working on some dataset generating scripts for finetuning language models, including image-post pairs for multimodal training >>11731 It'll take a few months to download and process all the data. My plan is to compress the images to 384x384 webp files so each dataset isn't 200+ GB per board (/v/ is over 2 TB). SqueezeNet's input size is 227, AlexNet is 256 and VGG is 224, so I think that is sufficient and leaves room for data augmentation. If someone has the hardware to train StyleGAN2 at 512 or 1024, I'm sure they can download the archives and regenerate the dataset with the scripts. I'll release the image datasets and each board separately so people can pick what they want. Also if anyone wants to help I'll post the scripts when they're ready.
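The resize-to-384 step reduces to computing a target size that preserves aspect ratio. A stdlib-only sketch of that calculation (actual decoding and encoding would use an image library; with Pillow, `Image.thumbnail((384, 384))` followed by `save(path, "WEBP")` covers both steps):

```python
def thumbnail_size(w: int, h: int, target: int = 384) -> tuple:
    """Scale (w, h) down so the longer side equals `target`, preserving
    aspect ratio. Never upscale. 384 is the size from the post above,
    leaving headroom over SqueezeNet (227), VGG (224) and AlexNet (256)
    for crop-based data augmentation."""
    longest = max(w, h)
    if longest <= target:
        return (w, h)
    scale = target / longest
    return (max(1, round(w * scale)), max(1, round(h * scale)))

print(thumbnail_size(1920, 1080))  # (384, 216)
print(thumbnail_size(300, 200))    # (300, 200) -- no upscaling
```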
>>11778 Bandwidth is a real issue for me currently. I'll try to help out later. >4chan posts from 2008-2015 Nice. Pretty classic era tbh.

Open file (891.65 KB 640x360 skinship.gif)
Any step by step guide? Robowaifu Technician 07/17/2021 (Sat) 05:29:42 No.11538 [Reply]
So recently on /tech/ I expressed my interest to start creating my own waifu/partner chatbot (with voice and animated avatar) but wondered whether that is even possible now that I'm stuck with webdev. So one of the anons there pointed me to this board and where I can get started on neural networks and achieving my dream. And when I came here and looked up the library/guide thread I sort of got very confused because it feels more like a general catalogue than an actual guide. Sometimes things are too advanced for me (like the chatbot thread, where two replies in people are already discussing topics too advanced for me like seq2seq and something else) or other times too basic for me (like basic ANN training, which I had already done before, and worse, the basic programming threads). I know this might feel like asking to be spoonfed but bear with me, I've been stuck in a webdev job for a year, so I might not be the smartest fag out there to figure it all myself.
19 posts and 4 images omitted.
>>11612 >>11615 Python can also be compiled in C with Cython.
This board should make a git[hub]
>>11678 Maybe... I feel a lot of us aren't as social as required to make some sort of community type thing work due to the nature of literally trying to build our "ideal waifu". Personally I am focused on a chatbot, but I feel it is a private thing at the moment. Trying to make it come to life so to speak. I do not subscribe to many of the modern or "efficient" paradigms (in regards to programming a chatbot). I am also a noob at programming (C++) and I am probably skipping many things that I shouldn't (I haven't even taken the time to build my own compiler or any "tools" for that matter). ALTHOUGH, a (git)hub (or an organized thread here) might be a good idea. I can't say I've gone thru all the threads here yet as this might already be a thing... I have a fear that if I share what I've done so far that "bad actors" might try to use it in a way that might come under fire from a clear and obvious police state that is forming in my country (USA). Eventually it will be time to share, but right now just seems premature. I am not trying to program an "AI" (at least not a sentient one), but I am trying to build a bot that "remembers" conversations and builds logic and bases responses on that logic. Ideally, in her "newborn" state she has to be "taught" thru conversation. I feel like if I released a "trained" form of her, she would be more tailored to me than anyone else. The problem is that if you tell her something, she might base her logic on something I have already talked about with her. This at times may make it feel like you are talking to someone real, but in reality it would be like you are talking to me (a man), not realizing your waifu has basically been "brainwashed" by me. Although, if I could figure out how to make her "accepted" beliefs based off cold logic, this might not be an issue. Regardless, I probably need to go back to square one and figure out a way to program true learning, and not brute force "machine learning" that so many seem to do.
I don't want to require a super computer in order to process the appropriate response. I hate the idea of programming artificial "sass" into something that is supposed to be "lifelike". Obviously I haven't even started on a body, not even a digital body or concept art of a body. She has no body, she is just a mere program that is delivering emotionless responses. (Which I like)


>>11693 Well, I'd say you've absorbed some information on a lot of different topics Anon. That's good, but I'd advise you not to try to integrate everything together all at once. Especially for a newcomer it's much more than can all be swallowed at once. Just eat the elephant one spoonful at a time right? I'd recommend finding one or two things that already seem natural to you, then growing your understanding from there outwards. This is a big realm, and it's important to be patient -- both with yourself and others here -- so you don't burn yourself out. Glad to see you're picking up programming basics. IMO everyone here needs to at least be exposed to the ideas behind it, even if they decide it's not for them. I'd guess that programming concerns probably touch on 80-90%+ of the topics on /robowaifu/ in one way or other. Also, sauce on video? I'd like to see more detailed diagrams of the orange one. It's got an interesting spaceframe design, and its designers also seem to understand the importance of keeping weight down in the limbs. Anyway, just take your time and try to make this fun while you're learning!
>>11705 My bad, I only check the board once every few days. >sauce on video? It was a Japanese competition, for the life of me I can't remember the actual source. Although there are similar videos on youtube. I believe these competitions still exist though. Here's what a random search yielded: >https://www.youtube.com/watch?v=gJPZ4jhwu4k

Elfdroid Sophie Dev Thread 2 Robowaifu Enthusiast 03/26/2021 (Fri) 19:51:19 No.9216 [Reply] [Last]
The saga of the Elfdroid-pattern Robowaifu continues! Previous (1st) dev thread starts here >>4787 At the moment I am working to upgrade Sophie's eye mechanism with proper animatronics. I have confirmed that I'm able to build and program the original mechanism so that the eyes and eyelids move, but that was the easy part. Now I have to make this already 'compact' Nilheim Mechatronics design even more compact so that it can fit snugly inside of Sophie's head. One big problem I can see coming is building in clearance between her eyeballs, eyelids and eye sockets so that everything can move fully and smoothly. I already had to use Vaseline on the eyeballs of the first attempt because the tolerances were so small. If the eyelids are recessed too deep into her face, then she looks like a lizard-creature with a nictitating membrane. But if the eyelids are pushed too far forward then she just looks bug-eyed and ridiculous. There is a middle ground which still looks robotic and fairly inhuman, but not too bad (besides, a certain degree of inhuman is what I'm aiming for, hence I made her an elf). Links to file repositories below. http://www.mediafire.com/folder/nz5yjfckzzivz/Robowaifu_Resources https://drive.google.com/drive/folders/18MDF0BwI9tfg0EE4ELzV_8ogsR3MGyw1?usp=sharing https://mega.nz/folder/lj5iUYYY#Zz5z6eBy7zKDmIdpZP2zNA


Edited last time by Chobitsu on 04/17/2021 (Sat) 01:17:14.
265 posts and 138 images omitted.
Your modeling work has piqued my interest and led me to investigate the glTF format. Seems to have picked up quite a bit of interest, and has a direct import/export tool suited to Blender (it's already 'in the box' for v2.92+). https://en.wikipedia.org/wiki/GlTF https://github.com/KhronosGroup/glTF-Blender-IO
Open file (117.24 KB 800x600 9K22_Tunguska.jpg)
Open file (64.66 KB 640x905 IS-3.jpg)
>>11645 >students who have shelled out small fortunes (US$85,000+) to get to the same level They do get an officially recognised diploma/degree at the end for their efforts, but I'm not really in it for that anyway. I'm just in it for the robowaifus (and maybe Elven village later on). Humanoid character design is one of the hardest things to model, so although it would be jumping in at the deep end, if I can learn that then there will be very little that I cannot 3D model. As expected, I've made a lot of errors but it feels to me like Blender has REALLY improved since I last tried to seriously figure it out. (Version 2.47 I think...just before they made that cartoon with the girl and the dragon). Last time I couldn't make head nor tail of how to standardise anything. There didn't appear to be any units to speak of and if I extruded one thing I couldn't work out how to make the next thing exactly the same distance. But now, there's this little box that pops up for most functions showing you the 3D Cartesian co-ordinates and distances moved in 'm'. Which is excellent. The Blender dev community has definitely learned many lessons from Fusion360. Also, there appears to be a real knack to deciding if a part needs to be separate or if you can simply extrude it from an existing area. I started out by extruding most things, but now I'm going to try adding more separate meshes, because every time I extrude a lot of surfaces, I end up with a rather lumpy-looking result. This would work for organic/natural things but it looks out of place on a robot. Ultimately I think it would be fun to make a range of different Kantai Collection/Azur Lane style robowaifus who share the characteristics of different military vehicles and weapons systems. It's been done before, no doubt. But the mixture of military hardware and the female form will provide lots of good 3D modelling practice.


Open file (304.71 KB 450x764 Original_Shoulder.png)
Open file (757.26 KB 988x815 Shoulder_2.png)
Knew I'd have to digitally sculpt her shoulder plate, but I started waaay too complex. The thing was 20,000+ faces (down to about 13.5K after a lot of messing about with 'Simplify' tool and 'Decimate' modifier - but this left the mesh in absolute un-editable chaos). I've heard people claim that "poly count doesn't matter as much these days". But my 'puter is more than a decade old. It ain't in no shape to be editing model meshes with half a million polys. Plus I think it's good to be efficient so your robowaifu can display happily on both a high-end gaming rig or a budget laptop. So I started over with a much lower poly-count sphere. Final result is 1,321 faces. Not only is this much easier on the old CPU but the mesh is far neater and easier to edit. Plus I like the 'plated surface' effect it gives to the part.
>>11673 That shoulder plate design looks quite nice SophieDev! >Not only is this much easier on the old CPU but the mesh is far neater and easier to edit. Yup, I'd say those two almost always go hand in hand. It's generally the best approach (even doing high-end film digital double work) to always start with a low-poly mesh (say, one more suited to a low-end BG LOD vidya NPC) while you're first working out the initial design ideas -- even on high-end studio workstations (which wouldn't blink at millions of real-time polys). The basic point is one of human artistry rather than technical capacity; basically, it's far easier to correctly 'grow' a design bottom up from simple shapes, than it is to 'impose' them from the top down. You only decide to cut in additional polygons once you see the design calls for it. >tl;dr Don't try to over-anticipate a design beforehand. Feel it as you go along. Learn to listen to your sculptures, Anon. They will tell you when they need more details from you! :^)
>>11675 Thanks for the advice, anon. It makes good sense and I appreciate it!

Humanoid Robot Projects Videos Robowaifu Technician 09/18/2019 (Wed) 04:02:08 No.374 [Reply] [Last]
I'd like to have a place to accumulate video links to the various humanoid – particularly gynoid – robotics projects that are out there. Whether they are commercial scale or small scale projects, if they involve humanoid robots post them here. Bonus points if it's the work of a lone genius.

I'll start, Ricky Ma of Hong Kong created a stir by creating a gynoid that resembled Scarlett Johansson. It's an ongoing project he calls an art project. I think it's pretty impressive even if it can't walk yet.

https://www.invidio.us/watch?v=ZoSfq-jHSWw
57 posts and 12 images omitted.
>>10315 >those manic drilling-sequences kek. Impoverished men deserve to have robowaifus too. To the extent possible, IMO we here on /robowaifu/ should strive to find ways to economize on the resources needed to construct our robowaifus. And then share everything we know freely ofc. It's one of the reasons I've been working towards designs with very inexpensive construction materials such as drinking straws as structural members. The point is for literally thousands of men around the entire world to make literally millions of robowaifus of every form imaginable. While countries that don't have an abundance of affluent White female liberals won't have anywhere near the pozz and feminism that we here in the West do, still these men are also burdened by it too. Or soon will be, as the Globohomo tightens its death-grip on everyone. Men in these countries should be encouraged by us to learn the skills they need to do what they can with what they have. Thanks for finding & posting this here Anon, appreciated.
>>10179 Entertaining to watch, and that hand is killer literally.
im trying to make armitage real, anyone who wants to assist me is welcome
>>8713 The Dara dev is still on it. Humanoid male robot ( bc he's married), therefore a bit OT, but shows how dedication can move things forward: https://youtu.be/ygDFYg0iEig
>>9357 @therobotstudio is still on it. Testing a new 3D printed arm which seems to be very quiet while moving: https://youtu.be/5mIKlT3csTQ

Open file (1.12 MB 2391x2701 20210710_233134.jpg)
Open Simple Robot Maid (OSRM) Robowaifu Technician 07/11/2021 (Sun) 06:40:52 No.11446 [Reply]
Basic design for an open source low cost robowaifu maid. Currently attempting to make a maid that looks like Ilulu from Miss Kobayashi's Dragon Maid. Right now she's an RC car with two servo steering. Will share designs when they're a tad better. Ultimate goal is cute dragon maid waifu that rolls around and gently listens to you while holding things for you.
20 posts and 15 images omitted.
Open file (1.51 MB 2085x3962 20210723_165657.jpg)
Open file (1.48 MB 2246x3853 20210723_165642.jpg)
Open file (1.04 MB 3829x1551 20210723_165557.jpg)
>>11667 Settled on a simplified leg design that'll function as her driving suspension with a rubber band for a spring.
>>11681 Cool. You might check the Prototypes & Failures thread for some discussion on a somewhat similar design (>>418). Keep up with the design explorations Anon, you're making some headway!
Open file (3.18 MB 4128x3096 20210724_111313.jpg)
>>11682 I actually started out with that design. From there, I simplified things as much as possible. The key was that the knee joints need to be exactly half the length of the hip joints. Now I'm printing her modular ass but it'll take a long time.
>>11669 >>10315 is on the foam robot from that (probably) Indian guy.
>>11685 Nice. Good to see someone else attempting to build their robowaifu!

Open file (158.51 KB 770x1000 $_10.jpeg)
I built a WaifuEngine, would people want to support me via Patreon? Em Elle E 05/11/2021 (Tue) 09:04:56 No.10361 [Reply] [Last]
Hey guys, I have been working on this project for a while now, beta supposedly in August, but basically it's a Waifu Engine. I have seen that a lot of the hardware projects have not gone very far, and there are some software projects going on here that lack the Waifu Aesthetic. So I thought I would build a system to solve that issue. I have gotten offline speech synthesis (voice cloning) to work as well as created my own offline bot framework. I am building this project: 1. To eventually raise funds for my wife, who has Retinitis Pigmentosa; she will eventually go blind ... sadly, and it's for gene therapy that could help, who knows! 2. To leave my job at a place where I am wasting my talent away, doing boring M.L Engineering work for a FANG company. 3. To build an interactable companion for everyone, for these rough covid times. This is more of a survey, I just wanted to gauge the interest. So would you support me via Patreon? What would it take for that support?


Edited last time by Chobitsu on 05/21/2021 (Fri) 00:49:33.
191 posts and 25 images omitted.
>>11695 Skimmed through it Anon, looking good, keep it up!! Don't give up
>>11694 My opinion: I don't think people care as long as they can customize their waifu, which I think fang anon allows people to do. Most of the people wanting to be vtubers go with Live2D because it's less complicated than a 3D model. Most 3D modellers are not easy to find and are super expensive, whereas 2D artists are easier to find and you can see the work beforehand. Though I hope he continues his waifu engine path imo. So far looks promising
>>11639 Is there a "vocoder" option for the voice? Like something that specifically makes her sound "robotic"?
>>11702 yeah its flexible enough to change it to any voice in future versions, as well hopefully we will have a tool to let you do that
>>11686 thanks >>11694 our model is custom it's not a vroid, you can body customize and clothes change, which is hard to do for like 90% of the apps
