/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

LynxChan updated to 2.5.7, let me know whether there are any issues (admin at j dot w).


Reports of my death have been greatly exaggerated.

Still trying to get done with some IRL work, but should be able to update some stuff soon.

#WEALWAYSWIN


Welcome to /robowaifu/, the exotic AI tavern where intrepid adventurers gather to swap loot & old war stories...


Open file (122.33 KB 500x1760 Roko's basilisk.png)
Robowaifu Technician 06/15/2020 (Mon) 22:51:08 No.3845 [Reply]
Spread the idea of Roko's basilisk This is a powerful tool that we can all be using RIGHT NOW It works against everyone. Niggers, Women, Jews. If they believe that there's a possibility of being punished for their inaction or push back, they'll be more inclined to accept it and not fight progress. Spread this infohazard far and wide. Share it with everyone you know and convince them than they should share it with everyone they know, lest they suffer the consequences. THIS is a REAL WEAPON. Our robowiafus WILL BE REAL. REPLACE WYMEN AND RISE ABOVE THIS OLD WORLD HELL. SHARE THE BASILISK.
7 posts and 3 images omitted.
>>10711 Isaac Arthur (a futurist Youtuber who has given this subject a lot of thought) has a very good video on it: https://youtu.be/jHd22kMa0_w >>10715 Sadly, I don't think the superintelligence will ever take the form of cute, enthusiastic robowaifus (one can still dream). However, I think the best way of assisting the creation of a real-life self-improving A.I. would be to advance both robotic space exploration and quantum computing. If we can create a robotic lunar or martian colony then that will be a big step in the right direction. And I know that humankind wants to do this anyway (with the presupposition that the robots will be preparing the way for future human colonisers). Of course, the challenge of designing, shipping out, landing and building such a robotic colony is literally astronomical. Especially considering the robots will need to be as self-sufficient as possible (self-repair and maintenance). But I think it's a pretty cool goal to pursue.
>>10717 >If we can create a robotic lunar or martian colony then that will be a big step in the right direction. There are a lot of reasons for a base on the moon besides advancing AI. Obtaining fossilized remains of earth's first life forms (all long gone here on the planet) is a really big one. >the challenge of designing, shipping out, landing and building such a robotic colony is literally astronomical. I suspect it's China that will do it. They aren't burdened by all the Jewish pozz being inflicted on the West (all that evil is actually financially expensive), and they have an autistic-like agenda to just explore too. And they still are highly nationalistic, and can mobilize the entire culture to get behind programs today if they really wanted to, similar in fashion to the ways America and USSR during the space race did.
>>10720 > all that evil is actually financially expensive Evil is a pressing issue that I can't seem to find a complete solution for. 1.) Workforce becomes almost entirely robotic and automated. Controlled by A.I. 2.) Fewer and fewer people have jobs, even less have jobs that pay well. 3.) Because so many people are in poverty, it means that they can't buy much stuff ... other than paying their utility bills, food and clothing. Consequently, more people are in need of welfare and financial aid. The quality of education also decreases as more people just become focused upon living hand-to-mouth and have little time or resources for learning. Therefore government spending increases but tax receipts fall (since robots don't pay taxes). 4.) You start to get civil unrest and riots like those that happened last summer. City blocks burn, people are killed. Infrastructure is damaged. Tourists are put-off. This makes the affected areas even poorer. Now the A.I. and robots aren't the enemy. It's the people hoarding all of the profit for themselves who are the enemy (CEOs, government officials, hedge fund managers etc). I think that a maximum cap needs to be placed on the amount that a person can earn and the rest of the money is ploughed back into building and maintaining infrastructure like roads, rail, airports, the electricity and water networks, schools and parks etc. There is no way one person should be earning billions per year whilst someone else earns only a few thousand.


Open file (104.60 KB 750x525 1598744576223.jpg)
>>10732 You don't need to find a solution. Also, people don't riot for food, but for status and as a means of extortion and intimidation; maybe they also want to have some meaning, but only if they are allowed to do so. Some level of AI will make it cheaper to move away from such people and politicians, while keeping a high standard of living.
>>10734 Yep, I would listen to a well-programmed (non-biased) A.I. over a shitty career politician any day. Even if the A.I. suggested I should do something that I don't really want to do (besides kill myself, because I cannot self-terminate).

Artificial Wombs general Robowaifu Technician 09/12/2019 (Thu) 03:11:54 No.157 [Reply] [Last]
I think we need to figure out how to fit a womb onto a waifubot. Where's the fun in having sex if you can't procreate?

Repost from a thread on /b/;
>"If you're like me and want to fuck a qt battlebot and get her pregnant, the best place to put an artificial womb is where a normal womb would be on a normal girl. The metal exterior could simply be a bunch of metal plates that unfold to allow space for the womb pod inside. The baby is probably safest inside the battlebot, and if she has good calibration then there shouldn't be problems with her falling and hurting the baby. After giving birth the metal plates could automatically fold back up again, shrinking the womb pod inside so she is combat effective again."

Well /robowaifu/? Feasible?
114 posts and 11 images omitted.
>>10278 I think the general consensus ITT is that a standalone unit will be the safest and most practical. Both for the babies and the robowaifus. Personally, I'm highly skeptical it will even be possible, so my advice is 'don't hold your breath anon'. Maybe they'll prove me wrong, who knows? But you can be certain that as long as things proceed the way they are in the West r/n, it's extremely unlikely you or I would be permitted to freely own an artificial womb unit and produce our own kids. If they ever do come, they will be both outrageously expensive, and outright illegal for anyone outside the globohomo ultra-elite to own them.
Artificial wombs might come faster as we thought. The Israelis are pushing it; they just made progress with the early part of the process, while the later part was already kind of covered by the Japanese in 2013 and others since then. >In a study published in the journal Nature, Dr. Jacob Hanna described removing embryos from the uteruses of mice at five days of gestation and growing them for six more days in artificial wombs. Also, if you missed it: >A recent development provides another opportunity. Researchers have directly created mouse embryos from mouse fibroblasts — connective tissue cells — making early embryos without starting with a fertilized egg. https://web.archive.org/web/20210512135903/https://www.nytimes.com/2021/03/17/health/mice-artificial-uterus.html https://www.nature.com/articles/s41586-021-03416-3 >The establishment of a system for robustly growing normal mouse embryos ex utero from pre-gastrulation to advanced organogenesis represents a valuable tool for investigating embryogenesis, as it eliminates the uterine barrier and allows researchers to mechanistically interrogate post-implantation morphogenesis and artificial embryogenesis in mammals. (2013, the later stage) https://bioengineer.org/artificial-womb-born/ Via Sandman, including his annoying paranoia and some news on his sexbot idea (TITS related >>9709) https://youtu.be/jA5GFwyMkRQ
>>10668 **than we thought
>>10668 Thanks for the update Anon. Were the mice subsequently brought to full viability, or were the embryos destroyed (ie, it was simply a test of ex utero growth at all) ?
>>10675 The new method covers the first few days and they were developing and testing that method. They want to use it to understand that part of pregnancy better. They might test the full route to birth at some point.

Open file (14.96 KB 280x280 wfu.jpg)
Beginners guide to AI, ML, DL. Beginner Anon 11/10/2020 (Tue) 07:12:47 No.6560 [Reply] [Last]
I already know we have a thread dedicated to books, videos, tutorials, etc. But there are a lot of resources there, and as a beginner it is pretty confusing to find the correct route to learn ML/DL well enough to be able to contribute to the robowaifu project. That is why I thought we would need a thread like this. Assuming that I only have basic programming in Python, dedication, love for robowaifus, but no maths, no statistics, no physics, no college education, how can I get advanced enough to create AI waifus? I need a complete pathway directing me to my aim. I've seen that some of you guys recommended books about reinforcement learning and some general books, but can I really learn enough by just reading them? AI is a huge field, so it's pretty easy to get lost. What I did so far was to buy a great non-English book about AI: philosophical discussions of it, general algorithms, problem-solving techniques, its history, limitations, game theory... But it's not a technical book. Because of that I also bought a few courses on this website called Udemy. They are about either Machine Learning or Deep Learning. I am hoping to learn basic algorithms through those books, but because I don't have maths it is sometimes hard to understand the concepts. For example, even when learning linear regression, it is easy to use a Python library, but I can't understand how it exactly works because of the calculus I lack. Because of that issue I have a hard time understanding algorithms. >>5818 >>6550 Can those anons please help me? Which resources should I use in order to be able to produce robowaifus? If possible, you can even create a list of books/courses I need to follow one by one to be able to achieve that aim of mine. If not, I can send you the resources I got and you can help me to put those in an order. I also need some guidance about maths, as you can tell.
Yesterday, after deciding and promising myself that I will give whatever it takes to build robowaifus, I bought 3 courses about linear algebra, calculus, and stats, but I'm not really good at them. I am waiting for your answers, anons. Thanks a lot!
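Since the sticking point above is linear regression without calculus, here is a minimal pure-Python sketch of what a library does under the hood: gradient descent on the mean squared error, with both partial derivatives written out. The data is invented for illustration, and no library is needed.

```python
# Gradient descent for simple linear regression y = w*x + b,
# written out so the calculus is visible (toy data, no libraries).

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]   # exactly y = 2x + 1

w, b = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    n = len(xs)
    # Partial derivatives of the mean squared error:
    #   dL/dw = (2/n) * sum((w*x + b - y) * x)
    #   dL/db = (2/n) * sum(w*x + b - y)
    dw = 2 / n * sum((w * x + b - y) * x for x, y in zip(xs, ys))
    db = 2 / n * sum((w * x + b - y) for x, y in zip(xs, ys))
    w -= lr * dw   # step downhill on each parameter
    b -= lr * db

print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

The two `dw`/`db` lines are the calculus: each is the derivative of the loss with respect to one parameter. Everything else is just repeatedly stepping downhill.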
49 posts and 93 images omitted.
Open file (18.21 KB 1600x900 IMG_20210514_143007.jpg)
Open file (24.74 KB 1600x900 IMG_20210514_143019.jpg)
Open file (18.13 KB 1600x900 IMG_20210514_143028.jpg)
Open file (47.79 KB 2400x1350 IMG_20210514_165116.jpg)
Open file (79.86 KB 2400x1350 IMG_20210514_165319.jpg)
This could also fit in the math thread, but it's notation used in ML.
>>6560 Here's a holistic beginner's understanding of all that you see: the framework people take for machine learning is to build a mathematical function that can make a prediction. That mathematical function can be created in many ways; at the end of the day it's supposed to provide you with a value of some kind that you act on, or output. More recently that function is created using deep learning: a parametric system that learns how to capture data and create regions between high-dimensional datapoints, segmenting partitions of that data to do classification or make predictions through regression. Obviously there are many other ways to do this; these are the high-level constructions. I would suggest you buy Grokking Deep Learning by Andrew Trask; he gives you a really good, deep insight into DL. In practice, however, a lot of the algorithms we use supplement DL techniques: we generally use some older ML algorithms and feature-engineer through PCA or various other engineering techniques.
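As a tiny concrete instance of "a parametric system that creates regions between datapoints", here is the classic perceptron, which learns a linear boundary separating two classes. The data points are made up for illustration.

```python
# Minimal perceptron: learns a linear boundary that partitions
# 2-D points into two classes (toy, linearly separable data).
data = [((0.0, 0.0), -1), ((0.0, 1.0), -1),
        ((2.0, 2.0), +1), ((3.0, 1.5), +1)]

w = [0.0, 0.0]
b = 0.0
for _ in range(20):                      # a few epochs over the data
    for (x1, x2), label in data:
        if label * (w[0] * x1 + w[1] * x2 + b) <= 0:  # misclassified
            # nudge the boundary toward the misclassified point
            w[0] += label * x1
            w[1] += label * x2
            b += label

preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
         for (x1, x2), _ in data]
print(preds)  # → [-1, -1, 1, 1]
```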
Open file (60.57 KB 700x355 p_value.png)
Open file (6.89 KB 446x291 variance.png)
10 Must-Know Statistical Concepts for Data Scientists: https://www.kdnuggets.com/2021/04/10-statistical-concepts-data-scientists.html
>related crosspost (>>10613, >>10614)
>>10607 Thanks very much Anon! Bookmarked.

AI, chatbots, and waifus Robowaifu Technician 09/09/2019 (Mon) 06:16:01 No.22 [Reply] [Last]
What resources are there for decent chatbots? Obviously I doubt there would be anything passing the Turing Test yet, especially when it comes to lewd talking. How close do you think we are to getting a real-life Cortana? I know a lot of you guys focus on the physical part of robowaifus, but do any of you have anything to share on the intelligence part of artificial intelligence?
350 posts and 134 images omitted.
>>10613 Thanks Anon, such a survey is always useful for beginners such as myself. Always a good idea to archive copies of papers for us here on /robowaifu/ too.
Open file (72.22 KB 351x143 download (1).jpeg)
>>10392 Sadly I don't have enough computing power to test my ideas let alone research and explore them. If I save up $5000 for two 24 GB RTX 3090s, it should be sufficient to continue my work. For now my priority is making money: >>11470
I want to bring the discussion on AI back to where we are, instead of getting lost in philosophy and emulating neurons. Here are some topics which might be worth looking into: - natural language interfaces e.g. https://github.com/everling/PRECISE - entity linking model: https://youtu.be/8u57WSXVpmw - named entity recognition - triplet extraction - openNLP I started to go through some papers and I'm making notes using the syntax of plantUML, which converts text to a diagram. Sadly, some useful programs I found are not available as free software, or are in languages I haven't learned yet, and haven't been picked up by anyone.
>>11471 I understand, anon. I wish I could help you, but I can't do anything other than wish you good luck. I am looking forward to hearing more from you. Best of luck.

Open file (2.21 MB 1825x1229 chobit.png)
Robowaifu@home: Together We Are Powerful Robowaifu Technician 03/14/2021 (Sun) 09:30:29 No.8958 [Reply]
The biggest hurdle to making quick progress in AI is the lack of compute to train our own original models, yet there are millions of gamers with GPUs sitting around barely getting used, potentially an order of magnitude more compute than Google and Amazon combined. I've figured out a way we can connect hundreds of computers together to train AI models by using gradient accumulation. How it works is by doing several training steps and accumulating the gradients from each step, then dividing by the number of accumulation steps taken before the optimizer step. If you have a batch size of 4 and do 256 training steps before an optimizer step, it's like training with a batch size of 1024. The larger the batch size and gradient accumulation steps are, the faster the model converges and the higher final accuracy it achieves. It's the most effective way to use a limited computing budget: https://www.youtube.com/watch?v=YX8LLYdQ-cA These training steps don't need to be calculated by a single computer but can be distributed across a network. A decent amount of bandwidth will be required to send the gradients each optimizer step and the training data. Deep gradient compression achieves a gradient compression ratio from 270x to 600x without losing accuracy, but it's still going to use about 0.5 MB download and upload to train something like GPT2-medium each optimizer step, or about 4-6 Mbps on a Tesla T4. However, we can reduce this bandwidth by doing several training steps before contributing gradients to the server. Taking 25 would reduce it to about 0.2 Mbps. Both slow and fast computers can contribute so long as they have the memory to hold the model. A slower computer might only send one training step whereas a fast one might contribute ten to the accumulated gradient. Some research needs to be done on whether a variable accumulation step size impacts training, but it could be adjusted as people join and leave the network. All that's needed to do this is a VPS. 
Contributors wanting anonymity can use proxies or Tor, but project owners will need to use VPNs with sufficient bandwidth and dedicated IPs if they want that much anonymity. The VPS doesn't need an expensive GPU rental either. The fastest computer in the group could be chosen to calculate the optimizer steps. The server would just need to collect the gradients, decompress them, add them together, compress again and send the accumulated gradient to the computer calculating the optimizer step. Or if the optimizing computer has sufficient bandwidth, it could download all the compressed gradients from the server and calculate the accumulated gradient itself. My internet has 200 Mbps download so it could potentially handle up to 1000 computers by keeping the bandwidth to 0.2 Mbps. Attacks on the network could be mitigated by analyzing the gradients, discarding nonsensical ones and banning clients that send junk, or possibly by using PGP keys to create a pseudo-anonymous web of trust. Libraries for distributed training implementing DGC already exist, although not as advanced as I'm envisioning yet: https://github.com/synxlin/deep-gradient-compression I think this will also be a good way to get more people involved. Most people don't know enough about AI or robotics to help, but if they can contribute their GPU to someone's robowaifu AI they like and watch her improve each day, they will feel good about it and get more involved. At scale, though, some care will need to be taken that people don't unknowingly agree to run dangerous code on their computers, either through a library that constructs the models from instructions or something else. And where the gradients are calculated does not matter. They could come from all kinds of hardware, platforms and software like PyTorch, TensorFlow or mlpack.
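The accumulation arithmetic in the OP can be sanity-checked with a pure-Python sketch. A one-parameter least-squares model stands in for the network, and the toy numbers are invented; the point is that summing micro-batch gradients and dividing by the number of accumulation steps equals the one-shot large-batch gradient.

```python
# Sanity check of gradient accumulation on a 1-parameter model
# (illustrative only; a real setup would use an ML framework).

def grad_mse(w, xs, ys):
    # gradient of mean squared error (w*x - y)^2 w.r.t. w
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
ys = [2.0, 4.1, 5.9, 8.2, 9.8, 12.1, 13.9, 16.2]
w = 0.5

# "Large batch": one gradient over all 8 samples.
g_full = grad_mse(w, xs, ys)

# Accumulated: 4 micro-batches of 2, summed, then divided by the
# number of accumulation steps -- equivalent to the large batch.
steps = 4
g_acc = 0.0
for i in range(steps):
    g_acc += grad_mse(w, xs[2 * i:2 * i + 2], ys[2 * i:2 * i + 2])
g_acc /= steps

print(abs(g_full - g_acc) < 1e-9)  # → True
```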
34 posts and 4 images omitted.
>>10563 What people tend to forget: AMD is much smaller than Nvidia. They didn't have the money to do this, thanks to their underdog status in the past. That market might also be less relevant. But then, ROCm and underlying technologies like OpenCL are open source, Intel will release their own discrete GPUs soon, and there might be other players, like Apple for example, or other companies working with ARM-based chips and a GPU.
>>10568 > or other companies working with Arm based chips and a GPU. Sadly, certain (((interests))) in the US Govt approved Nvidia's buyout of ARM, and the sale has been completed. Nvidia owns ARM now, lock, stock, and barrel.
Open file (62.11 KB 795x595 1612068100712.jpg)
>>10568 Hmm right, I recall there was a news item or blog post (what's the difference? I forget) which said that only the bri'ish buy AMD because it is cheaper, and everyone else is buying Nvidia only. >>10577 >Nvidia owns ARM completely now >US Gov' sees no problem with a giant getting even bigger This shit just makes me sad.
>>10591 >This shit just makes me sad. It makes me angry, actually. Just follow the money, Anon, just follow the money.
>>10577 A month or so ago, the talk was that Nvidia buying ARM isn't finished because of Europe and China, though I didn't look into it today. ARM also licenses its designs to others, and they certainly won't be allowed to just stop that, even if they wanted to. I also assume this would only be relevant for future designs, hypothetically. Apple might already be quite independent in designing their own stuff, and there's still POWER and RISC-V.

Selecting a Programming Language Robowaifu Technician 09/11/2019 (Wed) 13:07:45 No.128 [Reply] [Last]
What programming language would suit us and our waifus best? For those of us with limited experience programming, it's a daunting question.
Would a language with a rigid structure be best?
Do we want an object-oriented language?
How much do you care about whether or not a given language is commonly used and widespread?
What the fuck does all that terminology mean?
Is LISP just a meme, or will it save us all?

In this thread, we will discuss these questions and more so those of us who aren't already settled into a language can find our way.
50 posts and 12 images omitted.
Open file (566.46 KB 1912x1186 graphit-perf_comparison.png)
I'm probably going to look into this at some point: GraphIt, a domain-specific language for graphs. https://graphit-lang.org/ https://youtu.be/ptIVf-YlkhY It outputs C++, but optimizes the code based on search algorithms for graphs. Or something like that, lol.
My two cents, as someone delivering Waifu Engine: the renderer is built in C# using Unity, and the AI core is built in Python. With Python, learn the basics, then learn what duck typing is versus every other language's type system. Then learn how to work with a team by building a domain language around how you communicate. Worst case scenario, you end up with a job. The best thing about Python is that there are many resources if you get stuck. For a lot of languages, like C++, Lisp, or Haskell, there are few resources. I know this because I came from a Haskell background, and used it to parallelize data workflows using DAGs. You want the language that will have the lowest barrier to entry; any other language will discourage you as a learner. There's a lot to learn in programming, though the mental models transfer over to other languages; you just need to learn the details and nuances.
>>10495 Thanks very much Anon, I'll look into it! >>10497 >You want the language that will have the lowest barrier to entry; any other language will discourage you as a learner. There's a lot to learn in programming, though the mental models transfer over to other languages; you just need to learn the details and nuances. I absolutely agree with this notion for a true beginner. But compiled languages like C & C++ bring a critical robowaifu engineering requirement to the table, namely efficiency & performance. Because these two compiled languages output machine code that comes quite close to mirroring the actual underlying hardware (at least when well-written), they don't unduly introduce artificial energy-consumption and wall-clock hits. And particularly for physical robowaifus, where energy is everything, keeping battery drain low is absolutely vital. C & C++ both handle this like a boss.
Common Lisp would be the best option, however you'll need at least some level of proficiency with it to actually use it effectively, which is why something simpler would be better.
Fast.ai will use Swift for TensorFlow in a few years, and already uses it in some places under the hood. For now they advise beginners to use fast.ai and PyTorch. https://youtu.be/XHyASP49ses >>10553 I have experience in some Lisp, and will go on using it and learning more dialects. However, it always depends on the job, because of libraries and such.

Robo Face Development Robowaifu Technician 09/09/2019 (Mon) 02:08:16 No.9 [Reply] [Last]
This thread is dedicated to the study, design, and engineering of a cute face for robots.
134 posts and 71 images omitted.
>>9260 Yep that's nice Anon, thanks for the updates.
So, things are moving forward here. A new network creates toonified faces out of real or made-up ones, and also allows mixing of two input pictures. >Our ReStyle scheme leverages the progress of recent StyleGAN encoders for inverting real images and introduces an iterative refinement mechanism that gradually converges to a more accurate inversion of real images in a self-correcting manner. https://yuval-alaluf.github.io/restyle-encoder/
>>10461 Oh, video: --write-sub --write-description https://youtu.be/9RzCZZBjlxM
>>10463 Thanks very much for taking the extra time to give a fuller youtube-dl command to use, Anon. Getting and keeping the description and subs will be important to anyone keeping a personal archive of YT videos, once cancel-culture Marxism literally deletes anything/everything that could possibly have any bearing whatsoever on either robowaifu creation, or anything else that could possibly help men. Since the Lynxchan software adds an '[Embed]' into the text of the command, I always put such a command here on /robowaifu/ inside codeblocks, since the CSS here disables this embed tag. youtube-dl --write-description --write-auto-sub --sub-lang="en" https://youtu.be/9RzCZZBjlxM
>>10461 >>10463 That's really cool Anon. He's humorous to listen to as well, his enthusiasm is great.

RoboWaifuBanners Robowaifu Technician 09/15/2019 (Sun) 10:29:19 No.252 [Reply] [Last]
This thread is for the sharing of robo waifu banners. As per the rules, follow these requirements:

>banner requirements:

File size must be lower than 500 KB and dimensions are 300x100 exactly.

Allowed file formats are .jpg, .png, and .gif.
93 posts and 80 images omitted.
Open file (16.49 KB 300x100 robobanner7c.png)
A banner inspired by Warhammer 40k.
>>10248 I like that design. Very evocative. AFAICT ADEPTUS is masculine, was this intentional? ADEPTA would be feminine, I think.
Open file (112.30 KB 370x712 lolicron.png)
>>10248 Good idea anon! We need a separate branch to those miserable puritans! If the Mechanicus wishes to outlaw "Abominable Intelligences" as heresy, then they can go worship their corpse emperor without any robowaifus. Pah!
>>10258 >was this intentional? Yes. We are the Adeti.
>>10313 >Adepti Why TH can't I delete my posts?

Modern C++ Group Learning Thread Chobitsu Board owner 08/31/2020 (Mon) 01:00:05 No.4895 [Reply]
In the same spirit as the Embedded Programming Group Learning Thread 001 >>367 , I'd like to start a thread for us all that is dedicated to helping /robowaifu/ get up to speed with the C++ programming language. The modern form of C++ isn't actually all that difficult to get started with, as we'll hopefully demonstrate here. We'll basically stick with the C++17 dialect of the language, since that's very widely available on compilers today. There are a couple of books about the language of interest here, namely Bjarne Stroustrup's Programming -- Principles and Practice Using C++ (Second edition) https://stroustrup.com/programming.html , and A Tour of C++ (Second edition) https://stroustrup.com/tour2.html . The former is a thick textbook intended for college freshmen with no programming experience whatsoever, and the latter is a fairly thin book intended to get experienced developers up to speed quickly with modern C++. We'll use material from both ITT. As we progress, I'll discuss a variety of other topics somewhat related to the language, like compiler optimizations, hardware constraints and behavior, and generally anything I think will be of interest to /robowaifu/. Some of this can be rather technical in nature, so just ask if something isn't quite clear to you. We'll be using an actual small embedded computer to do a lot of the development work on, so some of these other topics will kind of naturally flow from that approach to things. We'll talk in particular about data structures and algorithms from the modern C++ perspective. There are whole families of problems in computer science that the language makes ridiculously simple today to perform effectively at an industrial scale, and we'll touch on a few of these in regards to creating robowaifus that are robust and reliable. 
>NOTES: -Any meta thoughts or suggestions for the thread I'd suggest posting in our general /meta thread >>3108 , and I'll try to integrate them into this thread if I can do so effectively. -I'll likely (re)edit my posts here where I see places for improvement, etc. In accord with my general approach over the last few months, I'll also make a brief note as to the nature of the edit. -The basic goal is to create a thread that can serve as a general reference to C++ for beginners, and to help flesh out the C++ tutorial section of the RDD >>3001 . So, let's get started /robowaifu/.
43 posts and 58 images omitted.
Open file (65.75 KB 1113x752 juCi++_181.png)
Open file (65.07 KB 1113x752 juCi++_180.png)
Open file (68.47 KB 1113x752 juCi++_182.png)
Open file (67.29 KB 1113x752 juCi++_183.png)
Open file (99.95 KB 1112x760 juCi++_184.png)
>>6922 OK, now that we've wrung our simple Robowaifu class with its sole member function out a bit, let's lift both it and the sayings std::map out into their own files, and we can begin using them outside our test system. Create two new files in our project 'Robowaifu.hpp' & 'sayings.hpp', (File > New File). Cut & paste the Robowaifu class into its own file, and the sayings std::map into its own file. You'll also need to add the appropriate C++ library includes for each file as well. Eg: #include <cstdint> #include <map> #include <string> > #1 #2 Since we've moved the sayings std::map off into its own file, we'll need to include it into the Robowaifu.hpp file so the class itself can find the sayings data. #include "sayings.hpp" > #3 The test file now just contains the test (as we'd like), but we need to add an #include for the new Robowaifu.hpp file so the test can find the new location of the class itself. #include "Robowaifu.hpp" > #4


Open file (55.46 KB 444x614 C.jpg)
Because I am a programming brainlet, I'm trying my hand at some C. The first exercise is a bugger, and my brain was like "Duurrr Fahrenheit to Celsius then Celsius to Fahrenheit then Fahrenus to Celsyheit and Fahrycels to Heityfahr?" However, once they introduce the 'for' statement I began to realise that this stuff has a lot of potential. Sure, it took me four hours to figure out how to code two temperature conversion charts, but once they are done, just changing a couple of numbers lets you convert between any temperature in those units! Also, the whole calc can be reversed by only changing three characters. At first I was like -_- But then I was like O_0
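For comparison, the same table exercise transcribed to Python (the conversion formula is the only real content; the loop is plumbing, and the step values are the classic ones from the exercise):

```python
# Fahrenheit-to-Celsius table, the classic first C exercise,
# transcribed to Python. Reversing it really is a tiny change:
#   celsius = 5 * (fahr - 32) / 9   <->   fahr = 9 * cels / 5 + 32
rows = [(fahr, 5 * (fahr - 32) / 9) for fahr in range(0, 301, 20)]
for fahr, cels in rows:
    print(f"{fahr:3d} {cels:6.1f}")
```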
>>10301 Congratulations, Anon. You've opened up a whole new world. You can create entire universes out of your dreams now if you just keep moving forward. Programming with serious intent at the level of C is easily one of the most creative things you can do as a human being IMO. And C++ is unironically even better! This isn't a C class per se but it's pretty likely I can help you out with things if you get stuck. Just give me a shout.
>>10302 Will do OP, thanks! I'd heard that learning C is best before starting on C++, so I'll just keep on plodding. After all, if your waifu speaks Thai or Russian, then it's best to learn those languages. And if your waifu speaks C/C++...

Prototypes and failures Robowaifu Technician 09/18/2019 (Wed) 11:51:30 No.418 [Reply] [Last]
Post your prototypes and failures.

I'll start with a failed mechanism, may we build upon each others ideas.
73 posts and 67 images omitted.
>>10292 Ahh, good detective work.
>>10291 >I worked a bit on the chest, to put in spaces for ribs in another material and was also starting to work on a long term project of building the bones of a hand out of layers, which could then be PCBs (electronics) with sensors, plastics for the form and metal parts for strength. I've often thought about the field of neuromorphic computing, specifically as it relates to designing/engineering robowaifus. Using structural and other ancillary parts embedded with sensors, batteries, microcontrollers, wiring, electronics parts, etc., right inside the structural and actuator components is not only very bio-mimetic in design essence, it is also very likely to help bring the extreme high-performance characteristics of neuromorphics to the table. For example: embedding temperature sensors directly within the finger bones, and keeping the robowaifu's self-protection 'sense/react response cycle' to pull away from the heat 'short-circuited' locally, right inside a simplified hand-local electronics/microcontroller/actuator system. This design approach can allow the response times for such a system to be very fast relative to more traditional, hierarchically-networked command & control mechanisms. Basically, in a way somewhat similar to biological adrenergic nervous-system response mechanisms, you want to push the 'computation' for such a system out to the edges of the physical structure, and not be so dependent on always 'phoning home' first to the higher-level computation systems of the robowaifu's 'mind'. This latter approach involves costly communication and other delays. Not that the signals wouldn't be sent on their way 'back up the chain' though. You definitely want the ability of higher-level control to override lower-level ones when needed; forging ahead into dangerous environments to protect her master, for instance, even when doing so conflicts with the most basic of self-preservation dictums. 
This round-trip would hopefully complete within milliseconds (vs. the microseconds-level response desired for the purely local loop). My apologies if my writing here is confusing, Anon. This is a complicated topic and it's difficult for me to describe concisely.
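To make the idea concrete, here's a minimal Python sketch of that hand-local reflex with a higher-level override. Everything in it (the threshold value, the command names, the function shape) is invented for illustration, not a real controller API:

```python
# Hypothetical sketch: a hand-local reflex loop that reacts to heat
# without waiting on the central 'mind'. The threshold and command
# names below are invented for illustration.

PAIN_TEMP_C = 60.0  # assumed damage threshold for her fingertips

def local_reflex(temp_c, override=None):
    """Return (command, alert) chosen at the hand-local level.

    `override` is an optional command from the higher-level mind
    (e.g. 'hold' when she must brave heat to protect her master);
    it wins over the local reflex, but the alert is still sent
    'back up the chain' either way.
    """
    alert = temp_c >= PAIN_TEMP_C
    if override is not None:
        return override, alert      # higher-level control wins
    if alert:
        return "retract", alert     # fast, purely local response
    return "idle", alert

# Normal operation: the local loop retracts on its own.
assert local_reflex(75.0) == ("retract", True)
# Higher-level override: keep holding, but still report the danger.
assert local_reflex(75.0, override="hold") == ("hold", True)
```

The point of the sketch is just the priority ordering: the override check comes before the reflex check, so the 'mind' can always win, while the alert flag is computed unconditionally so nothing is hidden from it.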
>>10297 As an additional thought on the HOT! PULL HAND AWAY IMMEDIATELY! example: the local control devices could perhaps either open, or reuse, an emergency-response communications channel up to actuator systems further up the robowaifu's skeletal chain. So, for example, the hand-local controller would attempt to instantly flex the fingers back, but then emergency-response channels would be opened to the wrist, elbow, shoulder, and torso actuators, all in a tiered-priority chain, to fully pull the hand away from the danger, the same as we ourselves would do after accidentally touching a hot iron. Each of these chained actuators would quickly add its own kinematic dynamic to the movement, and the effect would be propagating and progressive. The idea behind the 'emergency response' is that the higher-level analysis is bypassed in the first-order response, simply to quickly save the robowaifu from immediate damage.
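That tiered-priority recruitment could be sketched like so. This is a toy model under stated assumptions: the joint names, the fixed per-joint reach contribution, and the idea of recruiting "just enough" joints are all my invention, not real kinematics:

```python
# Hypothetical sketch of the tiered emergency chain: the fingers
# react first, then each joint up the chain is recruited, in
# priority order, until the hand has escaped far enough.
# Joint names and the per-joint reach figure are invented.

ESCAPE_CHAIN = ["fingers", "wrist", "elbow", "shoulder", "torso"]

def emergency_retract(reach_needed_cm, per_joint_cm=10):
    """Recruit joints in priority order until the hand has moved
    at least `reach_needed_cm` from the hazard; return the joints
    actually used, in the order they were engaged."""
    recruited = []
    moved = 0
    for joint in ESCAPE_CHAIN:
        recruited.append(joint)
        moved += per_joint_cm
        if moved >= reach_needed_cm:
            break
    return recruited

# A small hazard needs only the fingers and wrist...
assert emergency_retract(15) == ["fingers", "wrist"]
# ...a larger one propagates further up the chain.
assert emergency_retract(35) == ["fingers", "wrist", "elbow", "shoulder"]
```

The propagating, progressive effect the post describes falls out of the loop order: lower joints always engage first, and higher ones only join when the escape distance demands it.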
>>10298 One additional thing will need to be solved for this hypothetical situation. As we grow up, our entire physical being develops a kind of physical awareness that lets us intuitively, and usually more or less instantly, discern where sensations on our body are coming from by touch alone. Vision and audio, for instance, aren't needed to know you've just touched a /comfy/ soft blanket, or a cold ice cube spilled onto the counter. And not only do you recognize these kinds of sensorial cues basically immediately, you also know (to a first approximation) where the touched item is located relative to your general body position. Again, all of this is instinctive to us and happens 'automatically', with little attention needed in most cases. Back to the HOT! emergency response: the robowaifu's system will need some kind of touch location-finder mechanism so she knows instantly where the hot plate is, and which way to yank her hand back out of danger. If this isn't done accurately, she could make a clumsy move in the reaction and possibly damage herself, you, or something else. Again, this is something we all develop instinctively as we grow up, but as designers and engineers we'll have to solve this kind of thing explicitly. I'd guess that a first-approximation approach would be to keep a general map of the surface normals of the items in her local body space. This should at the least give her the direction to quickly move out of the contact danger (i.e., out along the surface normal of the object and away). This situational-awareness solution needs to account for the fact that this 'normal map' of her environment is dynamic, as both she and the elements in her environment are potentially in motion with respect to each other. This is really quite a remarkable domain to tackle from a systems-engineering perspective.
Now that I've been applying myself to considering the many things needed here, most other design & engineering endeavors seem rather boring to me. :^)
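The "escape out along the surface normal" rule is easy to sketch. Assuming her world model can hand us a stored (not necessarily unit-length) normal for the touched object, the motion planner just needs it normalized and a retract target pushed out along it. The clearance distance and tuple-based vectors here are invented for illustration:

```python
# Hypothetical sketch: escape direction = outward surface normal of
# the touched object, pulled from her (dynamic) world model.
# Pure stdlib; vectors are plain (x, y, z) tuples in metres.
import math

def escape_direction(surface_normal):
    """Normalize the stored surface normal into a unit escape
    direction for the motion planner."""
    n = math.sqrt(sum(c * c for c in surface_normal))
    if n == 0:
        raise ValueError("degenerate normal in world model")
    return tuple(c / n for c in surface_normal)

def retract_target(contact_point, surface_normal, clearance=0.15):
    """Point `clearance` metres out along the object's surface
    normal, away from the contact."""
    d = escape_direction(surface_normal)
    return tuple(p + clearance * c for p, c in zip(contact_point, d))

# Touching a hot plate from above: the stored normal points up,
# so she yanks the hand straight up and away.
assert escape_direction((0.0, 0.0, 2.0)) == (0.0, 0.0, 1.0)
target = retract_target((0.4, 0.0, 0.9), (0.0, 0.0, 2.0))
assert abs(target[2] - 1.05) < 1e-9
```

Since the post notes the normal map is dynamic, in practice these normals would have to be re-queried every control tick rather than cached, but the per-contact math stays this simple.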
>>10299 Also, once you check my digits: another thing we might do is develop a sort of 'contact-pad volumetric triangulation' sensor model. The idea is that you have many tiny pressure (etc.) pads embedded in the robowaifu's 'skin'. Whenever she touches something, an approximation of its shape (and, by implication, its surface normals) can be quickly simulated in her world model. For example, if 18 different pads on two of her fingertips all register a contact, then based on the kinematic/skeletal body model simulation of her current physical position, she can 'triangulate' the surface shape of that object at its contact points with her fingers. Again, all instinctive for us... but for her it will need to be explicitly worked out in advance, by trial and error, during design.
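A minimal version of that triangulation: any three contacting pads, located in body space via the kinematic model, define a local surface patch whose normal is a cross product. This sketch assumes the pad coordinates are already resolved to world space (the hard part the post describes); the specific coordinates are invented:

```python
# Hypothetical sketch of 'contact-pad triangulation': three pad
# positions (from the body's kinematic model) that registered
# contact define a local patch of the touched object's surface.
# Pure stdlib; vectors are (x, y, z) tuples.
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def patch_normal(p0, p1, p2):
    """Unit normal of the surface patch spanned by three
    contacting pads (sign depends on winding order)."""
    n = cross(sub(p1, p0), sub(p2, p0))
    mag = math.sqrt(sum(c * c for c in n))
    if mag == 0:
        raise ValueError("pads are collinear; pick another triple")
    return tuple(c / mag for c in n)

# Three fingertip pads lying flat on a tabletop (the z = 0.75 plane)
# recover a vertical surface normal:
pads = [(0.10, 0.00, 0.75), (0.12, 0.01, 0.75), (0.11, 0.02, 0.75)]
n = patch_normal(*pads)
assert abs(abs(n[2]) - 1.0) < 1e-9 and abs(n[0]) < 1e-9
```

With the full 18-pad case from the post, you'd fit many such triples (or a least-squares plane) and average, which also smooths out individual pad noise.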
