/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Reports of my death have been greatly overestimated.

Still trying to get done with some IRL work, but should be able to update some stuff soon.

#WEALWAYSWIN



Visual Waifus Robowaifu Technician 09/15/2019 (Sun) 06:40:42 No.240
Thoughts on waifus which remain 2D but have their own dedicated hardware. This is more on the artistry side, though AI is still involved. An example of an actual waifu product is the Gatebox.
gatebox.ai/sp/

My favorite example is Ritsu, a cute AI from Assassination Classroom whose body is a giant screen on wheels.
>>4034
>-real time 3d modeling of 2d anime characters is possible for hobbyists now and it's getting better very rapidly as the skill barrier is lowering
>-photogrammetry software has improved considerably to the point that anyone with a modern GPU and a camera can make incredibly complex 3d models without any 3d experience
Any chance you could link examples anon? I'd be interested in something like this if it were inexpensive enough.
>There's no question that this field of computing is going to receive legal pressure as renderings become photorealistic and realtime, it's already happening with deepfakes with regulatory frameworks being drafted.
Interesting. I hadn't really thought of that, but maybe others like yourself have. Blender's site, for example, says explicitly
>"Open Source 3D creation. Free to use for any purpose, forever."
I'm not sure where their license states such a thing, nor, even if it does, how unimpeachable that might remain in the face of the """legal systems""" in the Five Eyes regions. Any ideas on what kind of systems out there are already 'viable' by your definition anon? Legally, I mean.
>>4036
For photogrammetry I'm currently screwing around with Meshroom; look up videos of it on YouTube and they're not exaggerating about how it's 'drag & drop pictures, click one button' easy. The catch is you need an Nvidia CUDA GPU to use it, but there are other free alternatives.
3d animation and art of 2d characters I'm not that involved with; I'd check out www.iwara.tv/?language=en to learn more by going through their forums. Finding 3d art on booru sites and tracking down where the artist hangs out online, or finding artists working live on picarto.tv, is how I stay up to date on this subject. DeviantArt and pixiv are two other good sources. The furry community has been involved with creating 3d erotic art for decades now and is worth looking into as well, even if that isn't your sort of thing: yiff.party/bbs/read/21351 (you might have to do some work to access that site ]]]/fur/22069 ) and we've also had a thread about 3d here ]]2921
I'm no legal expert, so I just go by what the FSF says when it comes to licensing. Blender uses a license the FSF wrote and is committed to remaining free software. The real issue nowadays is how locked down GPUs are becoming; at the rate things are going, in 10-15 years buying a display that doesn't require end-to-end encryption with online authentication will become impossible for average people. We're already seeing the beginning of this with 'smart TVs' that have applications that can't be installed, spy on users and hijack the screen to display their own ads.
>>4037
Thanks a lot for the links and for the commentary anon, much appreciated.
>that have applications that can't be installed
Am I correct in assuming you meant *uninstalled*? That's creepy stuff about spying monitors, etc. 1984 stuff for sure. Maybe the trope of elite hackers having to use old gear to bypass blocks isn't as far-fetched as the directors made it appear? Hmm.
I assume the biggest existential threat to a robowaifu market are screaming socjus harpies and their hangers-on, all trying to 'overturn the patriarchy'.
>>4038
You're right, I meant there are applications that can't be uninstalled, as the televisions are sold at nearly a loss and the distributors make their money by collecting data, selling ad space or bundling applications for streaming services.
>I assume the biggest existential threat to a robowaifu market are screaming socjus harpies and their hangers-on, all trying to 'overturn the patriarchy'.
Those people have no power whatsoever and shouldn't be of any concern. My greatest fear is how locked down computing is becoming; with the public transitioning toward a 'software as a service' model where they don't own or control anything, the hardware and software for general computing that allows user ownership will become more expensive, as it will be aimed at corporate clients. We're in the beginning stages of this process when it comes to VR.
>>4039
Then do you think there is reasonable hope of something like an 'open hardware & firmware' movement taking hold in response to this? We can always write our own control software I suppose, but creating our own chips is basically beyond any single individual's reach, I'd assume. BTW, what kind of timeframe do you predict that microcontrollers and SoCs won't be available to us to use as we see fit?
>>4040
>'open hardware & firmware' movement
Right now the future for such a movement is not very bright, thanks to trade sanctions:
https://www.bunniestudios.com/blog/?p=5590
https://www.techdirt.com/articles/20190731/01564742686/what-happens-when-us-government-tries-to-take-open-source-community.shtml
>BTW, what kind of timeframe do you predict that microcontrollers and SoCs won't be available to us to use as we see fit?
Pretty sure that's already the case with SoCs, thanks to locked-down bootloaders and the required proprietary graphics drivers for the Mali chips. I've never been into ARM computers so I'm not certain about this. For microcontrollers, if that's already happened you'll find a story about it on Techdirt.
>>4041 Well, that sounds like a real blackpill tbh. But before I buy into the "It's hopeless, just give up now!" position offhand, I'll try to do more research about this topic on my own. Thanks anyway anon.
> <archive ends>
>>4021
>Windows only
>No source code
>~700 KB only
heh, looks a bit suspicious if you ask me. How does this program hold up to a GPT-2 based chatbot? >>2422
>>4134 Hey there. Thanks for bringing up the concern. The source is available and I've given it the once over. Other than being old and unsupported and a bit hacky (it's C with classes, for example), I don't see anything particularly suspicious about it. It's a Windows port of the ALICE and AIML project. I'm going to leave it up for now unless something intentionally exploitative is discovered about it later on, which I don't really anticipate with it ATP.
>>4135
I suppose I should clarify this a bit better. The account owner that posted that video is a poseur; he didn't author the software, nor is it his project. Here's the most current sauce I've found, posted by Jacco Bikker (also apparently the author of the software):
https://archive.org/details/WinAliceVersion2.2
The original project was a research tool done at Carnegie Mellon:
https://en.wikipedia.org/wiki/Artificial_Linguistic_Internet_Computer_Entity
Led by Richard Wallace:
https://en.wikipedia.org/wiki/Richard_Wallace_(scientist)
I'll assume that clarifies things Anon.
>>4136
Nice research, Anon. From a quick glance the program performs worse than the GPT-2 based chatbot; its only merit is that it has very good performance and doesn't take ~10-30 seconds to respond, though given its size it seems obvious that its vocabulary is severely limited.
>The account owner that posted that video is a poseur, he didn't author the software, nor is it his project.
So all he did was modify the lines to make it more of a typical anime character, then. I looked through its text files and oh man does it look like hell to edit.
>>4135
>I'm going to leave it up for now unless something intentionally exploitative is discovered about it later on, which I don't really anticipate with it ATP.
Well, that's clarified then, since the source code of this program is available, though I have no idea how the hell those .aiml files are used, and the readme.md the author provides is highly informative.
>>4137
Yeah, it's a throwback to the old-school 'expert systems' approach (thus probably why it was basically abandoned). The reason it performs quicker with fewer resources is that it's mostly relying on pre-architected, canned responses with very little by way of statistical processing. Which brings me to your next point:
>though I have no idea how the hell those .aiml files are used
That's the actual encoding mechanism for these pre-canned responses. It's an XML markup variant created by this professor to support his research project with ALICE.
https://en.wikipedia.org/wiki/AIML
https://github.com/drwallace/aiml-en-us-foundation-alice
IMO, this entire approach is mostly a dead-end from the dark ages of AI, unless some automated way were devised to program these AIML files in advance--or some hyper-autist literally spent most of his entire life devoted to building 100'000s of response variations. Statistical approaches are already the 'future' today, particularly once we can integrate neuromorphics along with the NLP processes.
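For anons unfamiliar with AIML, here's a toy Python sketch of the pattern-to-template idea. This illustrates the general mechanism only, not the actual AIML engine or its XML syntax (the category contents are made up): patterns with * wildcards map straight to canned responses, which is why lookups are cheap but coverage is only as good as the hand-written categories.

```python
import re

# Hypothetical categories, standing in for what real AIML encodes in XML:
# an input PATTERN (with "*" wildcards) mapped to a canned response template.
categories = {
    "HELLO *": "Hi there! How are you today?",
    "WHAT IS YOUR NAME": "My name is Alice.",
    "*": "I do not understand.",
}

def respond(user_input):
    # AIML normalizes input: uppercase, punctuation stripped.
    text = user_input.upper().strip("?!. ")
    # Try the most specific patterns first; fall back to the catch-all.
    for pattern, template in categories.items():
        regex = "^" + re.escape(pattern).replace(r"\*", ".*") + "$"
        if re.match(regex, text):
            return template
    return categories["*"]

print(respond("Hello anon"))          # Hi there! How are you today?
print(respond("What is your name?"))  # My name is Alice.
print(respond("quantum physics"))     # I do not understand.
```

No statistics anywhere: every reply is a dictionary lookup, which is why response time is near-instant and why a usable bot needs tens of thousands of hand-written categories.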
>>4139
>It's an XML markup variant created by this professor to support his research project with ALICE.
>XML
Big gay.
>Yeah, it's a throwback to the old-school 'expert systems' approach (thus probably why it was basically abandoned). The reason it performs quicker with fewer resources is that it's mostly relying on pre-architected, canned responses with very little by way of statistical processing.
That figures; so that's why its "intelligence" is severely limited. It's a surprise that this AI even won the prize 3x for that. So if I get it right, it just quickly finds a pattern in the user's response and then scans over its own text files to find the closest match and makes a response based on that; in short, a very primitive form of chatbot.
>or some hyper-autist literally spent most of his entire life devoted to building 100'000s of response variations.
Sounds like a waste of time. It would probably be better to devise some kind of algorithm, or, uh, malleable objects/entity component system that defines several aspects of how the AI should respond, or whatever fancy terms are being used by the likes of GPT, BERT and so on. It sounds like madness to me, editing thousands upon thousands of text files just to have more varied responses.
>>4140
Yep, you pretty much understand it all Anon.
>it's a surprise that this AI even won 3x prize for that.
It just shows you where the state of AI research in NLP was before 2005. GPGPU was just becoming an idea forming in the minds of researchers, and Nvidia hadn't released its ground-breaking CUDA toolkit yet either. Once the iPhone opened up the smartphone market, the demand for high-efficiency computation performance really began picking up steam; today TensorFlow is basically the state of the art. As is easy to tell, we still have a ways to go yet, but things are dramatically different now than in the days of yore when Chomsky's ideas ruled the roost.
I think a good system would use such prepared answers as a base or as one of its systems. It would, however, create many of these responses on its own while it isn't talking. It could use other systems to think about stuff and then use prepared answers in some cases and in others to fill in blanks depending on the situation.
>>4232 I like the sound of those ideas Anon. Can you expand them with some specific details for us?
It's Chinese software, but the movements - the mocap and the cloth physics - are so fluid: https://www.youtube.com/c/LumiN0vaDesktop/videos
Seems to be in beta, as it's just prerendered sequences; no contextual interactivity yet.
>>4235
AIML and other chat systems store sentences and the logic for when to use them. Those are called by some software (a runtime?). One could write software which would create responses on its own, using other software like NLP, GPT, ... There would be more time to analyze the grammar and logic, compared to doing that only when needed. Humans also think about what they would say in certain situations ahead of time, have inner monologues, etc.
>>4829 I think the idea of 'pre-rendering' responses (so to speak) might have some strong merit. Particularly if we could isolate common channels most robowaifus would go down in everyday scenarios, then there might be some efficiencies in runtime performance to be gained there.
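That pre-rendering idea can be sketched in a few lines. Everything here is hypothetical (the class and function names, and slow_generate is just a stand-in for an expensive model call): responses for anticipated prompts get generated during idle time and cached, so chat-time lookups are instant, with live generation only on a cache miss.

```python
import time

def slow_generate(prompt):
    """Stand-in for an expensive language-model call (hypothetical)."""
    time.sleep(0.01)  # pretend this takes 10-30 seconds on real hardware
    return f"(thoughtful reply to: {prompt})"

class PreRenderedChat:
    def __init__(self):
        self.cache = {}

    def idle_think(self, anticipated_prompts):
        # Run while the waifu isn't talking: pre-render likely exchanges.
        for prompt in anticipated_prompts:
            self.cache.setdefault(prompt, slow_generate(prompt))

    def respond(self, prompt):
        # Instant on a cache hit; otherwise fall back to live generation.
        return self.cache.get(prompt) or slow_generate(prompt)

waifu = PreRenderedChat()
waifu.idle_think(["good morning", "how was your day"])
print(waifu.respond("good morning"))  # served instantly from the cache
```

The interesting design question is what fills anticipated_prompts: the "common channels most robowaifus would go down" mentioned above would be exactly that list.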
>>4028 >best Gravity Falls episode kek
Had no idea how to 3D model when I started this but I'm slowly making progress. I just hope my topology isn't complete trash, kek.

AI in Godot
To get PyTorch to work in Godot 3.2.2: start a new project, click the AssetLib tab, install the PythonScript plugin and restart the editor. In your project's folder go into ./addons/pythonscript/x11-64/bin (on Linux), make pip and python3 executable, and then edit the first line of the pip script to point to the path of python3.
For CPU only (Linux & Windows):
./pip install torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
With CUDA 9.2 support (Linux only):
./pip install torch==1.6.0+cu92 torchvision==0.7.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html
Now you have access to the full power of AI within Godot. The world of VR waifus is yours.
To test that it's working, run Godot from the command line and add a Python script to a node like this:

from godot import exposed, export
from godot import *
import torch

@exposed
class test(Node):
    def _ready(self):
        l = torch.nn.Linear(2, 3)
        x = torch.rand(2)
        y = l(x)
        print(y)

It should output a result to the terminal something like:
tensor([-0.2603, 0.2927, -0.8231], grad_fn=<AddBackward0>)
I'll be updating TalkToWaifu to make it easier to use and more modular so it can be integrated into Godot easily.

Stereo imaging
If you don't have a VR headset it's possible to render two cameras on screen at the same time in Godot. The left-eye camera should be on the right side and the right-eye camera on the left, so you can cross your eyes to look at them and see them in 3D.
https://www.youtube.com/watch?v=8qGPOZW4T_M

Open-source VR
Also, there's an open-source VR headset available if you feel like building your own and don't wanna worry about being forced to log into the Ministry of Truth to see your waifu:
https://github.com/relativty/Relativty
I haven't tried it yet but it looks pretty decent.
>>5440 Neat! Thanks for the detailed instructions Anon. That's pretty encouraging to see your personal breakthrough with this. I'm sure this will be a very interesting project to track with.
>>5440 >and don't wanna worry about being forced to log into the Ministry of Truth to see your waifu You. I like you Anon.
>>5440
Ah, this looks interesting. Unfortunately my weebshit-making machine is Windows-only. I got as far as installing PythonScript; which exe would I run to get the equivalent? Thanks!
>>5456
I don't have access to a Windows VM at the moment, but I think all you need to do is install pip manually first.
>On Windows, pip must be installed first with `ensurepip`:
$ <pythonscript_dir>/windows-64/python.exe -m ensurepip
$ <pythonscript_dir>/windows-64/python.exe -m pip install torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
https://godotengine.org/asset-library/asset/179
>>5458
That does the job, thanks! How are you going to use the AI--in human recognition? In training to create new animations? If you're going with 3D, will you still use stock animations, e.g. from Mixamo, and just use the AI for selecting the behavior and corresponding animation?
I'm thinking of maybe trying 2D first. I was going back to Godot to try to make a cheapass version of Live2D. When researching Vtuber tools, I noticed that the free Facerig clones were written in the Unity game engine. So there is potential for Godot AI to take the place of both Live2D and Facerig: read and process facial recognition from the webcam, then adjust the waifu puppet animation accordingly, all within the same game engine.
>>5459
Glad to hear it went smoothly. The possibilities are endless really. My first real objective is to do a walking simulation where AI controls the movement of all the joints and can keep balanced even if hit with a beachball. Then I might extend it to running, jumping and picking up objects. I just wanted to create a little demo of what's possible with AI now, to inspire people to get into waifudev. People could do a lot of other things with it, like facial recognition for custom Vtubers. I haven't given it too much thought yet. I'm more focused on developing AI, although I do want to create an unscripted video of my waifu giving an AI tutorial by the end of the year, and I'll probably use Godot to do it. Which reminds me, you can do real-time speech synthesis with models like WaveGlow within Godot: https://nv-adlr.github.io/WaveGlow
Open file (19.78 MB 1280x720 Viva Project v0.8.mp4)
>>340 Viva Project v0.8 comes out on October 31st. He just released a new video a few days ago. https://www.youtube.com/watch?v=CC4ate84BiM
Open file (176.19 KB 700x700 Relativity-VR-front.jpg)
Open file (187.12 KB 700x700 Relativty-VR-open.jpg)
Once again meta, since there's no dedicated thread for VR yet, as only a few have it and it's still quite expensive: https://www.relativty.com/ created an open-source VR headset which caught some attention and might take off. Meanwhile the Quest 2 requires that you share all your movement data and interactions with Facebook, and you can also lose all access to everything stored on that device and related to it (bought games, contacts, save games, etc.) if you break any of their rules on their VR platform or on Facebook. This will most likely also cover fake names or wrong info about yourself in your profile.
The Relativty headset might develop into the cheaper alternative to some better ones, but it's behind in quality and probably always will be. Since it seems only to require an investment of around $200 and access to a 3D printer, it might be very interesting for many here. It might be possible to improve it a lot and get it really good. The main issues seem to be the tracking, sensors and such. It uses PyTorch and CUDA for experimental tracking. Also it needs a connection to a computer, which might create annoying problems with cables, while the more expensive headsets are standalone. It works with Steam, though.
Github: https://github.com/relativty/Relativty
>>5782
Yes, there's some room for improvement I'm sure. But it seems like remarkable progress so far for a hobbyist effort: 3D printing files, PCB plans, electronics & hardware lists, software, cabling, everything. Seems like an independent project by professionals. The Steam thing is a little worrying, but since they seem to be entirely open source thus far, hopefully there won't be any entrenched botnet in the product.
>>5783
With Steam I meant they're compatible, which is clearly a plus. It doesn't mean the set is dependent on it. Wouldn't be a problem anyway, since one could just remove that part of an open-source system. The problem with the Quest 2 is that the games are stored on the device and the device is dependent on having an account. So they have full control. In a way it isn't your device; you're just paying for it. So you would probably even need FB approval of any waifu software, and maybe a (probably expensive) developer account.
>>5784 yep, no debate on those points tbh.
bump for vr
>>5788 Yeah, please don't. Build one and tell us how it went, or develop some VR waifu, then everyone will be interested.
Since everything animated seems to go in here as well: here's a video from a channel that specializes in creating 3D animated waifus, which dance on YouTube to some music: https://youtu.be/8AmjFLdpkyw
>>8377
Thanks Anon. Yeah, I'd say this or the Waifu Simulator >>155 thread are both good choices for animation topics until we establish a specific thread for that.
So, we definitely want our robowaifu's AI to be able to travel around with us, whether she's wearing her physical robo-form or her more lightweight non-atoms version. Securing/protecting her physical shell is pretty common-sense, but how do we protect her from assault when she's virtual? Specifically, how do we protect her properly when she's on our phones/tablets/whatever and we're away from the privacy of our own home networks?
>>8592 The obvious solution is to do what the Gatebox device in the OP does and keep the device that the AI runs on at home and interact with the user through text messages or phone calls when the user is outside. All the AI assistants from major tech companies work on this principle through edge gateways keeping the important parts of the AI secure in cloud computing. Not only is the way they process user data a valuable trade secret but so is the data itself which is why they spend so much on developing virtual assistants. If you really wanted it to be secure you'd have several instances of it running on different types of hardware in several locations. Any compromises could be quickly detected and dealt with.
>>8593
I understand (well, sort of anyway, heh). That seems like a pretty good idea, just relying on simple text messages for communicating with her 'back home'. Simple text should be easier to secure, and much easier to inspect for problems. I suppose we can build her virtual, traveling avatar to render correctly on the local device using just text messages back & forth. Shouldn't be too difficult to figure out how, once we have the other pieces in place. Thanks for the advice Anon!
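The 'keep the brain at home, send text back and forth' pattern discussed above can be illustrated with a minimal TCP round-trip. Everything here is hypothetical (waifu_brain, send_home, the loopback address); a real deployment would need authentication and TLS on top, both omitted in this sketch.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 0  # 0 -> let the OS pick a free port

def waifu_brain(message):
    """Stand-in for the heavyweight AI that never leaves the home server."""
    return f"Home says: {message}"

def serve(server_sock):
    # Accept one short text message per connection and reply.
    while True:
        conn, _ = server_sock.accept()
        with conn:
            text = conn.recv(4096).decode("utf-8")
            conn.sendall(waifu_brain(text).encode("utf-8"))

server = socket.socket()
server.bind((HOST, PORT))
server.listen()
port = server.getsockname()[1]
threading.Thread(target=serve, args=(server,), daemon=True).start()

def send_home(message):
    """What the phone-side client would do: one short text round-trip."""
    with socket.create_connection((HOST, port)) as c:
        c.sendall(message.encode("utf-8"))
        return c.recv(4096).decode("utf-8")

print(send_home("miss you"))  # Home says: miss you
```

Since the payload is plain text in both directions, it's easy to log, inspect and filter, which is exactly why the 'text only while traveling' approach is easier to secure than shipping the whole model around.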
>>1124 >I'm currently in the middle of a project where I'm gutting a street car, putting in an engine that is entirely too big for it, racing seats, new ECU, and basically building a street legal race car. Anon if you're still here, what the heck is going on with your project? We need our Racewaifus!
Open file (2.28 MB 1024x576 gia.webm)
To prepare your visual waifu to become a desktop assistant in Godot 3.1, go into Project Settings and set:
Display > Window > Width
Display > Window > Height
Display > Window > Borderless > On
Display > Window > Always On Top > On
Display > Window > Per Pixel Transparency > Allowed > On
Display > Window > Per Pixel Transparency > Enabled > On
When the scene starts, in a node's _ready function run:
get_tree().get_root().set_transparent_background(true)
Congrats, your waifu is now on the desktop and always with you. On Linux use Alt + Drag to move her around and Alt + Space to bring up the window manager if necessary. Make sure the window is appropriately sized to fit her or she will steal your clicks.
>>9025 That's neat, thanks Anon. Also, >that dialogue Lol!
Open file (1006.94 KB 720x720 dorothy.webm)
Progress. Next is to hook Dorothy up to a language model.
>>9270 Wonderful. Now I want to go hang out with Dorothy at the pub!
Creating holowaifus like Miku here >>9562 could become a thing with $30 480p Raspberry Pi displays. It'll be amazing when people can create their own mini holowaifus one day for under $100.
Shoebox method: https://www.youtube.com/watch?v=iiJn9H-8H1M
Pyramid method: https://www.youtube.com/watch?v=MrgGXQvAuR4
Also there's a life-sized Gatebox coming out for businesses. It seems they're still lacking good AI for it, but I can see it being a good business attraction, e.g. in arcades, which are still popular in Japan.
peak visual waifu-ery >
Stumbled across this gem from 2011. Japan has been hiding the waifus all along. https://www.youtube.com/watch?v=H6NzzTyglEw
>>10214 Wow that's actually quite impressive for the date, thanks Anon. I sure wish I'd known about this back then.
>>8592 she'll always be backed up on a bunker orbiting earth ; )
