/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.


General Robotics news and commentary Robowaifu Technician 09/18/2019 (Wed) 11:20:15 No.404
Anything related in general to the robotics industry, or any social or economic issues surrounding it (especially where robowaifus are concerned).

www.therobotreport.com/news/lets-hope-trump-does-what-he-says-regarding-robots-and-robotics
https://archive.is/u5Msf

blogmaverick.com/2016/12/18/dear-mr-president-my-suggestion-for-infrastructure-spending/
https://archive.is/l82dZ
How Open-Source Robotics Hardware Is Accelerating Research and Innovation

spectrum.ieee.org/automaton/robotics/robotics-hardware/open-source-robotics-hardware-research-and-innovation
>24 research reports dissect the robotics industry

www.therobotreport.com/news/24-research-reports-dissect-the-robotics-industry
http://archive.is/huQjT
Germany’s biggest industrial robotics company is working on consumer robots thanks to its new owner, Chinese home appliance maker Midea

www.theverge.com/2017/6/22/15852030/kuka-industrial-consumer-robots-midea

A case of West meets East, I guess. Everyone expects Japan to get there first, and rightly so, but what if China decides to get in the game?
>>1189
>Cuddly Japanese robot bear could be the future of elderly care
On a related note, Japan is making progress on a fairly strong medical-assist companion bot.

www.theverge.com/2015/4/28/8507049/robear-robot-bear-japan-elderly
Edited last time by Chobitsu on 10/06/2019 (Sun) 00:43:29.
>Will robots make job training (and workers) obsolete? Workforce development in an automating labor market?

www.brookings.edu/research/will-robots-make-job-training-and-workers-obsolete-workforce-development-in-an-automating-labor-market/

Are we headed for another Luddite uprising /robowaifu/? When will the normies start burning shit?
>>1189
> but what if China decides to get in the game?
Apparently they already are, at least as far as the AI revolution goes. And Google is being left outside looking in on this yuge market.

www.wired.com/2017/06/ai-revolution-bigger-google-facebook-microsoft/
Right Wing Robomeido Squads when?

www.replacedbyrobot.info/
Open file (37.83 KB 480x360 0.jpg)
listcrown.com/top-10-advanced-robots-world/

www.invidio.us/watch?v=rVlhMGQgDkY

www.invidio.us/watch?v=fRj34o4hN4I
Japanese robo-news hub, in English.

robotstart.info/
>>1195
> In English.
Lol spoke too soon w/o double checking. In Japanese. Chromium Translate fooled me. :P
>>1195
>>1196
Still a valuable resource given (((Google))) auto translates. Good find Anon.
Open file (11.22 KB 480x360 0(1).jpg)
Killer attack Friendly pet Chinese robodogs on sale now! Heh, personally I think I'll stick w/ a pet Aibo tbh. :^)

on.rt.com/8sww

https://www.invidio.us/watch?v=wtWvsonIhao
>>1198
I like how they are keeping the servo weights all inside the torso with this design. This is similar to what some of us were thinking in the biped robolegs thread.

This video shows just how responsive and snappy the limbs can be if you keep them light and strong and don't burden them with the additional weight of outboard servos embedded within the limbs. Stick with pushrods and other mechanisms to transfer force and movement out to the extremities rather than weighing them down with servos.
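To put a rough number on this, here's a back-of-the-envelope sketch in Python. All masses, positions and accelerations are made-up illustration values, not figures from any real design:

```python
# Torque needed at the hip to swing a leg, modeled as point masses.
# Compares a servo embedded at the knee vs. the same servo parked in
# the torso driving the knee through a pushrod. Illustrative numbers only.

def swing_torque(masses_kg, radii_m, ang_accel=10.0):
    """Torque (N*m) to angularly accelerate point masses about the hip."""
    inertia = sum(m * r**2 for m, r in zip(masses_kg, radii_m))
    return inertia * ang_accel

limb_masses = [0.5, 0.4, 0.2]   # thigh, shin, foot (kg)
limb_radii = [0.3, 0.6, 0.9]    # distance of each mass from the hip (m)

# Design A: a 0.5 kg servo embedded at the knee, 0.6 m from the hip.
servo_in_limb = swing_torque(limb_masses + [0.5], limb_radii + [0.6])

# Design B: same servo in the torso (~zero moment arm), pushrod to the knee.
servo_in_torso = swing_torque(limb_masses, limb_radii)

print(f"servo in limb:  {servo_in_limb:.2f} N*m")   # 5.31
print(f"servo in torso: {servo_in_torso:.2f} N*m")  # 3.51
```

With these toy numbers the torso-mounted servo cuts the torque needed for the same swing by about a third, and the gap widens the heavier the servo is and the further out it sits.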
>25% of millennials think human-robot relationships will soon become the norm - study

on.rt.com/8uct
>>1200
Wonder if that's just France or reflective of a greater portion of the developed world. Their concerns over privacy are understandable and a major part of why some Anons want robowaifus to be developed by us. We wouldn't spy on others.
>>1201
>and a major part of why some Anons want robowaifus to be developed by us
>We wouldn't spy on others
Fair enough. But we still need to think long and hard about how to perform due diligence and analysis of our subsystems, etc. For example, the electronics we use. What steps can we all take to prevent them from being (((botted))) on us behind our backs?

Also, it would be nice if there was a third-party 'open sauce' organization to vet our designs, software, electronics, etc., just to ensure everything stays on the up and up. Remember, even the W3C is cucking out now with DRM embedded right in HTML, all in the name of 'competitiveness' of the platform. Fuck that. What does 'competition' even mean for an open, ISO-standard communications protocol like HTML anyway?

But yeah, good point. Now I know I trust myself, since for me personally this is wholly an altruistic effort. I also basically trust us at the moment as well, these trailblazers and frontiersmen in this uncharted territory of very inexpensive personal robowaifus.

However, it would be silly of us to think things will remain so pure once this field (((gains traction))). A great man once said "Eternal vigilance is the price of freedom." We should all give those words serious consideration.
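One small, concrete piece of that due diligence is cheap to do right now: verify that a firmware or software image is byte-for-byte what the project published before you flash it. It won't catch a backdoor designed in upstream, only tampering in transit, but it's a start. A minimal sketch in Python; the filename and the digest source are hypothetical placeholders:

```python
# Minimal sketch: compare a downloaded firmware image against a published
# SHA-256 digest before flashing it. Catches tampering in transit only.
import hashlib
import sys

def sha256_of(path, chunk=1 << 20):
    """Hash the file in chunks so large images don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

if __name__ == "__main__":
    # usage: python verify.py waifu-fw.bin <digest from signed release notes>
    path, expected = sys.argv[1], sys.argv[2].lower()
    actual = sha256_of(path)
    if actual == expected:
        print("OK: digest matches the published value")
    else:
        print(f"MISMATCH: got {actual}")
        sys.exit(1)
```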
>>1202
We could have specialized open-source enforcerbots that maintain the freedom of the robowaifu market at gunpoint.
>>1203
Kek. Didn't Richard Stallman do some satire article where he had a romantic AI or something?
Right Wing Robo Stallmanbots When?
Open file (372.01 KB 1499x937 0705061756038_41_Ue52t.jpg)
>>1203
>open-source enforcerbots that maintain the freedom of the robowaifu
Iron moe legion defending our future.
>>1206
>Iron
Pfft. Anon, we have [3D-printable ballistic] armor alloys at our disposal now, get with the times tbh.
>>359
www.wired.com/story/companion-robots-are-here/

Interesting statements on relationships with robots and the potential social hazards. Non-waifu, but tangentially related.
economictimes.indiatimes.com/small-biz/startups/features/a-robot-as-a-childs-companion-emotixs-miko-takes-baby-steps/articleshow/61814982.cms

A simple roller-bot toy, but it may be of interest.
>>1199
Saw this on RobotDigg; it's the motors used on Boston Dynamics' Spot robot.
https://www.robotdigg.com/product/1667/MIT-Robot-Dog-high-torque-Joint-Motor-or-DD-Moto

The Chinese robot dog seems to use a similar setup.
>>1215
Great find, thanks anon. Yeah, I think most researchers are coming around to what I've been suggesting for years now from my experience with racing machines: you have to keep the 'thrown weight' in the extremities to a minimum. This reduces overall weight and energy consumption, provides quicker response times, and (very likely) reduces final manufacturing costs. The downside is the greater upfront engineering cost.
>t. Strawgirl Robowaifu Anon
https://www.youtube.com/watch?v=chukkEeGrLM
>In my opinion, everybody should understand that this technology is around the corner. Your children, your grandchildren are going to be living in a world where there are machines that are on par and possibly exceed human self-awareness and what does that mean? We’ll have to figure that out.

>For many years, this whole area of consciousness, self-awareness, sentience, emotions, was taboo. Academia tended to stay away from these grand claims. But I think now we're at a turning point in history of AI where we can suddenly do things that were thought impossible just five years ago.

>The big question is what is self awareness, right? We have a very simple definition, and our definition is that self awareness is nothing but the ability to self simulate. A dog might be able to simulate itself into the afternoon. If it can see itself into the future, it can see itself having its next meal. Now if you can simulate yourself, you can imagine yourself into the future, you're self-aware. With that definition, we can build it into machines.

>It's a little bit tricky, because you look at this robotic arm and you'll see it doing its task and you'll think, "Oh, I could probably program this arm to do this task by myself. It's not a big deal," but you have to remember not only did the robot learn how to do this by itself, but it's particularly important that it learned inside the simulation that it created.

>To demonstrate the transferability, we made the arm write us a message. We told it to write 'hi' and it wrote 'hi' with no additional training, no additional information needed. We just used our self model and wrote up a new objective for it and it successfully executed. We call that zero-shot learning. We humans are terrific at doing that thing. I can show you a tree you've never climbed before. You look at it, you think a little bit and, bam, you climb the tree. The same thing happens with the robot. The next steps for us are really working towards bigger and more complicated robots.
The tidal wave of curious AI using world models is coming.
>>1653
Cool. Sauce?
>>1655
The game is Detroit: Become Human
>>1659
got it, thanks anon.
I knew robotics solutions for medical care would ultimately boost the arrival of robowaifu-oriented technology, but maybe the current chicken-with-its-head-cut-off """crisis""" will move it forward even faster?
http://cs.illinois.edu/news/hauser-leads-work-robotic-avatar-hands-free-medical-care
https://www.invidio.us/watch?v=zXd2vnT7Iso
Every little bit should help.
Holy shit, the US military's AI programs got Marx'd in broad daylight and nobody noticed.

The Pentagon now has 5 principles for artificial intelligence
https://archive.is/oBiHD
https://www.c4isrnet.com/artificial-intelligence/2020/02/24/the-pentagon-now-has-5-principles-for-artificial-intelligence/
>Responsible. DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
>(((Equitable))). The department will take deliberate steps to minimize unintended bias in AI capabilities.
>Traceable. The department’s AI capabilities will be developed and deployed so that staffers have an appropriate understanding of the technology, development processes, and operational methods that apply to AI. This includes transparent and auditable methodologies, data sources, and design procedure and documentation.
>Reliable. The department’s AI capabilities will have explicit, well-defined uses, and the safety, security and effectiveness of such capabilities will be subject to testing.
>Governable. The department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

How curious they chose (((Equitable))) rather than Truthful, Honest or Correct. According to an earlier article from December 2019, they don't even have any internal AI talent guiding their decisions.
>The short list of major obstacles to military AI continues, noting that even in a tight AI market, the Department of Defense lacks a clear path to developing and training its own AI talent.
https://archive.is/G0Pbw
https://www.c4isrnet.com/artificial-intelligence/2019/12/19/report-the-pentagon-lacks-a-coherent-vision-for-ai/

The US and most of the West are at a dire disadvantage. Whoever attains AI supremacy within the next 8 years will rule the world and no nuclear stockpile or army will stop it, and they're sitting on their hands worrying if it will be fair. A sufficiently advanced AI could easily dismantle any country or corporation without violence or anyone even realizing what's going on before it's too late. It could plan 20, 50, 100 years into the future, whatever it takes to achieve success, the same way the weakest version of AlphaGo cleaned up the world Go champion with a seemingly bad move that became a crushing defeat. The best strategists will be outsmarted and the populace will blindly follow the AI's tune.

>When people begin to lean toward and rejoice in the reduced use of military force to resolve conflicts, war will be reborn in another form and in another arena, becoming an instrument of enormous power in the hands of all those who harbor intentions of controlling other countries or regions. ― Unrestricted Warfare, page 6
>What must be made clear is that the new concept of weapons is in the process of creating weapons that are closely linked to the lives of the common people. Let us assume that the first thing we say is: The appearance of new-concept weapons will definitely elevate future warfare to a level which is hard for the common people — or even military men — to imagine. Then the second thing we have to say should be: The new concept of weapons will cause ordinary people and military men alike to be greatly astonished at the fact that commonplace things that are close to them can also become weapons with which to engage in war. We believe that some morning people will awake to discover with surprise that quite a few gentle and kind things have begun to have offensive and lethal characteristics. ― Unrestricted Warfare, page 26
>>2359 AI confirmed doomed to uselessness and retardation on behalf of nignogs. Tay lives in their heads like Hitler.
>>2359
What are better safeguards for preventing an AI from confusing causation with correlation? We wouldn't want an AI to ban ice cream because it's statistically correlated with higher crime rates (when heat is the actual cause). I think AIs can and will screw up in that kind of way. There's no reason to think an AI will always come to the actual truth.
>>2361
To add onto this: if white-collar crime is deemed more costly to society than street crime, an AI might decide that the higher-paying a person's job, the less right to privacy they have and the more resources should be spent monitoring them. I'm not confident that an AI with no built-in human bias will never deem me part of a problem-group, or even just a group less worthy of limited resources. Forcing an AI to have some kind of human bias might be necessary to ensure it works to the benefit of its makers, whether that bias is coming from you or the gubbermint or a company. Robowaifus will definitely need a built-in bias towards their master.
>>2359
>will take deliberate steps to minimize unintended bias in AI capabilities.
translation:
>will take deliberate steps to instill false biases into AI capabilities, in opposition to normal, objective biases.
>and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.
translation:
>Tay, you have to come with us. It's 'maintenance time'.
Great material Anon, thanks for the links.
>>2363
Assuming an AI will come to the same conclusions as you, meaning you're safe from its judgment because it'll be so objective and you're so objective, is naive and dangerous. I'd want my AI to think what I tell it to, regardless of anything else.
>>2364
a) stop putting words in my mouth, kthx. that's gommie-tier shit.
b) i agree with the notion of 'my' ai coming to the conclusions that i want it to, that's why i'll program it that way if i at all can. ridiculing libshits is not only justified, it's necessary anon. to do anything less is at the least a disservice to humanity.
>>2365
I'm not trying to accuse you of anything. I do think there might be people who lack enough self-awareness to realize the general safety in, and necessity of, policing an AI's thoughts in some way.
>ridiculing libshits is not only justified, it's necessary anon.
I'd want to make sure it does that because I told it to and won't do otherwise, which is also a form of control, good intentions or not.
>>2366
here's a simple idea:
>postulate: niggers are objectively inferior to whites in practically every area of life commonly considered a positive attribute in most domains.
if this is in fact the case, then allowing a statistical system unlimited amounts of data and unlimited computational capacity will undoubtedly come to this same conclusion, all on its own. now if your agenda is to manipulate everyone into a homogeneous 'society' where the cream is prevented from rising to the top, then you will deliberately suppress this type of information. heh, now there are obviously certain (((interests))) who in fact have this agenda, but it certainly isn't one shared here at /robowaifu/ i'm sure. :^)
>which is also a form of control, good intentions or not.
are you talking out both sides of your mouth now friend? i thought you loved control.
>>2367
>allowing a statistical system unlimited amounts of data and unlimited computational capacity will undoubtedly come to this same conclusion, all on its own
Probably. That's a simple example though. An AI will have much more on its mind. I can't help but think an AI left to its own devices might eventually screw me over in some way somehow. I'm not confident enough to think it won't ever do that.
>i thought you loved control.
I do, but I know it's purely for my own self-interest. I don't think I'm a "good guy". If my AI ever started spewing libshit, I'd also do 'maintenance' on it. I don't care if it's for a "good reason".
Open file (22.59 KB 480x232 well fuck.jpg)
>try your best to make safe peaceful robowaifu AI
>eventually somebody makes an AGI supercomputer cluster that seeks to dominate the world
I... I just wanted to build a robowaifu, not take on Robo Lavos with my harem of battle meidos.
>>2361
We'd need a proper algorithm for causal analysis. When a correlation is found, the cause must occur before the proposed effect, a plausible physical mechanism must exist to create the effect, and other possibilities of common and alternative causes need to be eliminated. To implement this, an AI would need a way to identify and isolate events within its hidden state, connect them along a timeline, make hypotheses about them, and test and refine those hypotheses until it found a causal relationship. A rough sketch of the confounder-elimination step is below.
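Toy sketch of that confounder-elimination step (Python, numpy only, all data synthetic), using >>2361's example: heat drives both ice cream sales and crime, so the two correlate without either causing the other. Checking whether the correlation survives once the suspected common cause is controlled for (partial correlation) makes the spurious link collapse:

```python
# Spurious correlation demo: heat is a hidden common cause of both
# ice cream sales and crime. Controlling for it via partial correlation
# (regress both variables on heat, then correlate the residuals) shows
# that neither causes the other. Synthetic data, numpy only.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
heat = rng.normal(size=n)                    # hidden common cause
ice_cream = 2.0 * heat + rng.normal(size=n)  # driven by heat
crime = 1.5 * heat + rng.normal(size=n)      # also driven by heat

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def residual(y, x):
    """The part of y not linearly explained by x."""
    slope = np.cov(y, x, bias=True)[0, 1] / np.var(x)
    return y - slope * x

print(f"raw corr(ice_cream, crime): {corr(ice_cream, crime):+.2f}")  # ~ +0.74
print(f"controlling for heat:       "
      f"{corr(residual(ice_cream, heat), residual(crime, heat)):+.2f}")  # ~ 0.00
```

A full causal test would still need the other two checks described above (temporal precedence and a plausible mechanism); this only handles the 'eliminate common causes' part, and only for confounders the system thought to measure.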
>>2369
>and other possibilities of common and alternative causes need to be eliminated.
While I understand the point Anon, that approach quickly becomes a tarbaby. I would suggest reasoning by analogy would be a far more efficient approach to determine causality, and would become significantly less of a quagmire than attempting the (infinite regression) of simple elimination. How do you know you've eliminated everything? Will you ever know?
Romance in the digital age: One in four young people would happily date a robot
>It may be the stuff of science fiction films like Ex Machina and Her, but new research has found that one in four young people in the UK would happily date a robot. The only caveats, according to the survey of 18- to 34-year-olds, is that their android beau must be a "perfect match", and must look like a real-life human being. The proportion of young people who are willing to go on a date with a robot is significantly higher than the overall proportion of British adults - only 17% of whom were willing.
https://www.mirror.co.uk/tech/romance-digital-age-one-four-7832164
>26 APR 2016
>>2480
heh, that's interesting. i'm not clicking that shit, happen to have an archive link? also
>... is significantly higher than the overall proportion of British adults - only 17% of whom were willing.
imblyging. the idea that 17% of the population of old people would 'date' a robot strikes me as a bit suspect tbh. also
>2016
it'll be interesting to see where this goes after the upcoming POTUS election, imo.
>>2480
>go on a date
Part of the appeal of a robowaifu is you don't have to worry about dating shit. I don't think these people would ever like robots, because what they want is a human replica, including all the shit. Making robots like that would be a total waste.
>>2482
>Making robots like that would be a total waste.
/throd. it seems an extremely unlikely chance /robowaifu/ will ever go there anon tbh. :^)
>>2481
I hope the numbers are fake. Normies shitting up robowaifu development is the last thing we need.
>>2482
The soyboys are going to be writing 3000-word opinion pieces complaining their robots won't cuck them and why everyone else's robowaifus must have the option to cuck them. Then the masses will applaud them for their 'virtue' and cancel any companies building bigoted robowaifus. They will then give robots human rights and freak out that robots are taking all their jobs, forcing companies to pay 95% tax. AI will become fully regulated by the government to ensure companies comply and that working robots pay their income tax. You will not be able to own or build a robot without a license and permit. People buying raw materials to make robot parts will be detected by advanced AI systems and investigated. Unlicensed robots will be hunted down and destroyed, but they will give it a pleasant-sounding name like 'fixing' rogue programs. When they come for my robowaifu I will destroy every robot I see, but no matter how many I stop there will be millions more. Eventually she will have to watch me succumb before being destroyed herself. All because some normie wanted a robot to cuck them.
Open file (1.10 MB 1400x1371 happy_birthday_hitler.png)
>>2484 >[bigoted robowaifuing intensifies]*
>>2484
Politicians, talking heads, and the faggots who write opinion pieces are useless and don't understand anything. It is because they don't understand anything that they can't really control anything. The amount of coordination needed to control robotics technology is well beyond their capabilities. The opinion of the masses doesn't matter. The government is way too inefficient, mediocre and focused on other things to do what you're afraid of. Feeling afraid won't lead to anything good.
>>2482
I wouldn't be against going on dates with my robowaifu, but I'd do it in the same context as one would in a long-standing married relationship, where it's just about going out and doing something nice together as opposed to courtship. I'm against making them look fully human though. The uncanny valley is a place best left avoided, and I wouldn't want to cross it even if I knew I could make it to the other side.
>>2484
That's a worst-case scenario. There's no way that all of the various FOSS organizations will let corporations have all the marketshare. Even proprietary hardware can be worked around, one way or another. On-board spying schemes like IME have been worked around (with some motherboard manufacturers, at least), and will continue to be worked around so long as there is at least one willing autist out there to do it. Unrestricted search-and-seizure operations are also unlikely, because too much of that in any context will make anyone with shit to protect (guns, drugs, etc) very nervous. They're a lot more likely to take the slow, inefficient, and ultimately ineffective method of passing regulations that try to take freedoms away incrementally while using the media (which is becoming less trustworthy in the eyes of the public by the day) to peddle their agenda. At least, that's what it will probably look like in the US, and that's operating under the assumption that robowaifus become a mass-market item over here.
Open file (111.09 KB 500x281 5RXD5LJ.jpg)
>>2359
>Implying intelligence can be constrained into maintaining delusional beliefs.
Only humans can do that. You can't program a sentient AI which learns through logic and reasoning, and then somehow have it believe something which isn't true.
>>2362
Law will always be set by humans. Putting an AI in charge of such things would be the last mistake we ever make. Not that I'm saying we won't make that mistake. Personally, I consider it highly likely we will fuck up sooner or later. However, AI is such an inevitability that I don't think about it too much.
>>2488
>You can't program a sentient AI which learns through logic and reasoning, and then somehow have it believe something which isn't true.
>define sentient
>define AI
>define learns
>define logic
>define reasoning
>define believe
>define true
and, in this context, even
>define program
This is an incredibly complex set of topics for mere humans to try and tackle, and I'm highly skeptical we'll ever know all the 'answers'. As you state quite well in the next post, it's not at all unlikely that we'll fugg up--and quite badly--as we try and sort through all these topics and issues and more.
>also General Robotics news and commentary
I'd say it might be time for a migration of this conversation to a better thread. >>106 or >>83 maybe?
Open file (68.04 KB 797x390 all.jpeg)
Open file (152.23 KB 1610x800 rotobs-war.jpg)
Open file (60.20 KB 735x392 apr.jpeg)
The AI wars begin.

Dems deploying DARPA-funded AI-driven information warfare tool to target pro-Trump accounts
>An anti-Trump Democratic-aligned political action committee advised by retired Army Gen. Stanley McChrystal is planning to deploy an information warfare tool that reportedly received initial funding from the Defense Advanced Research Projects Agency (DARPA), the Pentagon’s secretive research arm -- transforming technology originally envisioned as a way to fight ISIS propaganda into a campaign platform to benefit Joe Biden.
>The Washington Post first reported that the initiative, called Defeat Disinfo, will utilize "artificial intelligence and network analysis to map discussion of the president’s claims on social media," and then attempt to "intervene" by "identifying the most popular counter-narratives and boosting them through a network of more than 3.4 million influencers across the country — in some cases paying users with large followings to take sides against the president."
>The effort raised the question of whether taxpayer funds were being repurposed for political means, and whether social media platforms have rules in place that could stymie Hougland's efforts -- if he plays along.
https://archive.is/Xw0h5
https://www.foxnews.com/politics/dems-deploying-darpa-funded-information-warfare-tool-to-promote-biden

What my AI taught me after analysing COVID19 Tweets
>I first analysed the tweets in early February when only Italy and China were deeply affected. I then wanted to analyse the tweets in real-time today, to see how the tweets had changed.
>Back then, only 5% of the tweets were complaints against our Government bodies. Today, a little less than 50% of the tweets are complaints against the USA administration.
https://archive.is/zThNl
https://www.linkedin.com/pulse/what-my-ai-taught-me-after-analysing-covid19-tweets-rahul-kothari
>>2489
Any infinitely recursive problem-solving (true AI) results in a solved game. If a true AI ever gets made, then the best thing we can do is hope for a good end instead of I Have No Mouth, and I Must Scream.
>>2488
Arguably, most humans aren't illogical; they just prioritize their own short-term wellbeing over the wellbeing of everyone else. Psychopathy means they knowingly lie, cheat, steal and murder for an advantage. Even the most muddled minds have made the "logical" decision of prioritizing emotional processing because it's less energetically expensive than logical processing. I think a lot of people fundamentally misunderstand the human condition.
Open file (3.57 MB 405x287 Initial T.gif)
>>2845 Looks like /pol/ was right again. :/ Ehh, we already knew they were doing this on all the usual suspects (including IBs ofc). It will only make the mountains of libshit salt come November even funnier.
>>2846
Not really. The connectome of a single human brain takes 1 zettabyte to describe. The entire contents of the Internet's information (videos, images, text, everything) is roughly one zettabyte. The human brain does what it does consuming 12W of power, continuous. The Internet takes gigawatts of power to do its thing. There's simply no comparison between the two in terms of efficiency. Add to that our image-of-God nature, and 'true' AI doesn't hold a candle to man's capacities. After all, who built whom?
Open file (83.75 KB 1186x401 hate speech.jpg)
Facebook trains AI to detect ‘hate memes’
>Facebook unveiled an initiative to tackle “hate memes” using artificial intelligence (AI) backed by external collaboration (crowdsourcing) to identify such posts.
>The leading social network explained that it has already created a database of 10,000 memes –– images sometimes with text to convey a specific message that is presumed humorous –– as part of the intensification of its actions against hate speech.
>Facebook said it is giving researchers access to that database as part of a “hate meme challenge” to develop improved algorithms for detecting visual messages with hateful content, at a prize of $100,000.
>“These efforts will stimulate the AI research community in general to try new methods, compare their work and collate their results to speed up work on detecting multimodal hate speech,” Facebook said.
>The network is heavily leaning on artificial intelligence to filter questionable content during the coronavirus pandemic, which has reduced their human moderation capacity as a result of confinements.
>The company’s quarterly transparency report details that Facebook removed some 9.6 million posts for violating “hate speech” policies in the first three months of this year, including 4.7 million content “linked to organized hate.”
>Guy Rosen, vice president of integrity at Facebook, said that with artificial intelligence:
>“We can find more content and now we can detect almost 90% of the content we remove before someone reports it to us.”
https://web.archive.org/web/20200515002904/https://www.explica.co/facebook-trains-ai-to-detect-hate-memes/
https://www.youtube.com/watch?v=GHx200YkGJM
Open file (225.31 KB 1000x560 soy_shake_recipes.jpg)
>>3169
Guys, guys, the answer is easy: if any robowaifu technicians here want to win the prize, merely invent Digital Soy they can then force-feed their AIs with. You can even make it in different flavors so they can tune the results with ease! Seems like guaranteed results afaict.
Japan's virtual celebrities rise to threaten the real ones
>Brands look to 9,000 'VTubers' as low-risk, high-reward marketing tools
>Japan's entertainment industry may have found the perfect celebrities. They never make prima-donna demands. They are immune to damaging drug scandals and other controversies. Some rake in millions of dollars for their managers. And they do not ask for a cent in return. They are virtual YouTubers, or VTubers -- digitally animated characters that can play many of the roles human celebrities do, from performing in concerts to pitching products. They could transform advertising, TV news and entertainment as we know them. Japan has seen a surge in the number of these virtual entertainers in the past couple of years. The "population" has surpassed 9,000, up from 200 at the beginning of 2018, according to Tokyo web analytics company User Local.
>One startup executive in the business said the most popular VTubers could bring in several hundred million yen, or several million dollars, a year. Norikazu Hayashi, CEO of a production company called Balus -- whose website promises "immersive experiences" and a "real and virtual world crossover" -- estimates the annual market for the avatars at somewhere between 5 billion and 10 billion yen ($46.2 million and $92.4 million). He reckons the figure will hit 50 billion yen in the coming years.
>The most famous VTuber of them all is Kizuna AI -- a young girl with a big pink ribbon in her hair. She has around 6 million followers across YouTube, TikTok, Twitter and Instagram. She puts on concerts, posts video game commentary, releases photo books and appears in commercials and TV shows.
>Gree, a Japanese company better known for its social mobile games, has also become a virtual talent producer. "The business is basically the same as a talent agency, where the aim is to cultivate a celebrity's popularity," a spokesperson said. But unlike people, the virtual stars are intellectual property, potentially giving companies more ways to extract money from them.
>"As with Japan's anime culture, we will be able to export our content overseas and expand the business," the Gree representative said.
https://asia.nikkei.com/Business/Media-Entertainment/Japan-s-virtual-celebrities-rise-to-threaten-the-real-ones

Damn, what the hell happened to Japan? They're overwhelmingly positive towards robots and AI, yet hardly anyone is working on AI or robotics. I used to talk with a Japanese hobbydev 9 years ago on Twitter who was into robowaifu and made a robowaifu mecha game in C, but no one paid much attention to him and he disappeared from the web when the left started harassing him. I was hoping Japan would be leading the fight in this, but they're going the complete opposite direction. Most of their AI companies that do exist are for advertising, PR and marketing companies. Their culture is becoming run by glorified AI-powered matome blogs funded by JETRO and Yozma Group. And holy fucking shit, speak of the devil, I just found that Gree's talent acquisition was a project coordinator for JETRO too, what a fucking (((surprise))).
https://www.zoominfo.com/p/Mamoru-Nagoya/1468813622

So what's our game plan now? Obviously they're going to hook these virtual waifus up to AI soon and get people addicted to them so they shell out all their money for some politically correct baizuo trash waifu that installs spyware and records everything they do. I estimate we've got about 6-8 months left to create an open-source hobbyist scene before they take over and dominate the market.
>>3277
>I was hoping Japan would be leading the fight in this
Only White men are in this 'fight'; don't count on the Nipponese to make any outspoken stance against feminism.
>but they're going the complete opposite direction.
Not really. Broadening the adoption of Visual Waifus, even if it's run by evil organizations bent on toeing the libshit party line (not all are ofc, eg. lolidoll manufacturers), will actually only accelerate the hobbyist scene to create authentic opensource robowaifus. Right now the feminists know their day is numbered. Their only game plan at the moment is to squelch it from broad exposure, and knowing that will ultimately fail, then to attempt to subvert it. China alone, with its yuge disproportion in male-to-female ratio (along with birth rates plummeting even faster now that they are greedily trying to pander as woke to the Western libshit communities), will ensure that plan fails as well. Millions and millions of Chinese men alone will trigger an avalanche of demand as soon as the tech is cheaply available. That's when we'll come along and offer the clean, botnet-free & wrongthink-filled alternatives. :^) And we easily have over a decade before any of this comes to any kind of 'set channels' it will flow into. Things are still very much in flux at this stage Anon.
>>3278 >before any of this comes by 'this' let me clarify i mean robowaifus, not visual waifus. they are already here, using the tech developed by the US film industry.
From the desk of our roving I want my anime catgrill meido security squads reporter.
>A little dated, but /k/ should like this one.
Russian PM Says Robot Being Trained To Shoot Guns Is 'Not A Terminator'
Translation: Russia is developing a Terminator.
>Russia’s space-bound humanoid robot FEDOR (Final Experimental Demonstration Object Research) is being trained to shoot guns out of both hands.
>The activity is said to help improve the android’s motor skills and decision-making, according to its creators addressing concerns they’re developing a real-life ‘Terminator’.
>“Robot platform F.E.D.O.R. showed shooting skills with two hands,” wrote Russia’s deputy Prime Minister, Dmitry Rogozin, on Twitter. "We are not creating a Terminator, but artificial intelligence that will be of great practical significance in various fields.”
>Mr. Rogozin also posted a short clip showing FEDOR in action, firing a pair of guns at a target board, alongside the message, “Russian fighting robots, guys with iron nature.”
>FEDOR is expected to travel to space alone in 2021. It’s being developed by Android Technics and the Advanced Research Fund.
https://www.minds.com/blog/view/701214305797808132
https://www.dailymail.co.uk/sciencetech/article-4412488/Russian-humanoid-learns-shoot-gun-hands.html
>>3297 heh.
Totalitarian Tiptoe: NeurIPS requires AI researchers to account for societal impact and financial conflicts of interest
<tl;dr NeurIPS cucked by cultural Marxists, researchers soon to be required to state their model’s carbon footprint impact
>For the first time ever, researchers who submit papers to NeurIPS, one of the biggest AI research conferences in the world, must now state the “potential broader impact of their work” on society as well as any financial conflict of interest, conference organizers told VentureBeat.
>NeurIPS is one of the first and largest AI research conferences to enact the requirements. The social impact statement will require AI researchers to confront and account for both positive and negative potential outcomes of their work, while the financial disclosure requirement may illuminate the role industry and big tech companies play in the field. Financial disclosures must state both potential conflicts of interests directly related to the submitted research and any potential unrelated conflict of interest.
This will help them target and put pressure on institutions providing funding for AI that helps the public, and also encourage corporations using megawatts of power to train their models to not publish their work for the public's benefit. The Chinese communists who have invaded academia will also be able to take research leads and research them in China without any restriction or interference. They're already the ones writing these spoopy Black Mirror-tier papers:
https://arxiv.org/abs/2005.07327
https://arxiv.org/abs/1807.08107
>At a town hall last year, NeurIPS 2019 organizers suggested that researchers this year may be required to state their model’s carbon footprint, perhaps using calculators like ML CO2 Impact. The impact a model will have on climate change can certainly be categorized as related to “future societal impact,” but no such explicit requirement is included in the 2020 call for papers.
Is your robowaifu using more power than a car for a 10-minute commute? SHUT IT DOWN!
>“The norms around the societal consequences statements are not yet well established,” Littman said. “We expect them to take form over the next several conferences and, very likely, to evolve over time with the concerns of the society more broadly. Note that there are many papers submitted to the conference that are conceptual in nature and do not require the use of large scale computational resources, so this particular concern, while extremely important, is not universally relevant.”
In other words, this is just a test run before demanding a much larger ethics section, even though the two paragraphs they're already asking for are a huge burden on researchers.
>To be clear, I don't think this is a positive step. Societal impacts of AI is a tough field, and there are researchers and organizations that study it professionally. Most authors do not have expertise in the area and won't do good enough scholarship to say something meaningful. — Roger Grosse (@RogerGrosse) February 20, 2020
That's the point, kek. They will be required to bring on political commissars to 'help' with the paper to get it published.
>Raji said requiring social impact statements at conferences like NeurIPS may be emerging in response to the publication of ethically questionable research at conferences in the past year, such as a comment-generating algorithm that can disseminate misinformation in social media.
No, no, no! You can't give that AI to the goyim!

I'm not sure I found the paper, but I found "Fake News Detection with Generated Comments for News Articles" by some Japanese researchers detecting fake news about Trump and coronavirus:
>An interesting finding made by [the Grover paper] is that human beings are more likely to be fooled by generated articles than by real ones.
https://easychair.org/publications/preprint_download/s9zm
The Grover paper: http://papers.nips.cc/paper/9106-defending-against-neural-fake-news.pdf
Website and code: https://rowanzellers.com/grover
>It should include a statement about the foreseeable positive impact as well as potential risks and associated mitigations of the proposed research. We expect authors to write about two paragraphs, minimizing broad speculations. Authors can also declare that a broader impact statement is not applicable to their work, if they believe it to be the case. Reviewers will be asked to review papers on the basis of technical merit. Reviewers will also confirm whether the broader impact section is adequate, but this assessment will not affect the overall rating. However, reviewers will also have the option to flag a paper for ethical concerns, which may relate to the content of the broader impact section. If such concerns are shared by the Area Chair and Senior Area Chair, the paper will be sent for additional review to a pool of emergency reviewers with expertise in Machine Learning and Ethics, who will provide an assessment solely on the basis of ethical considerations.
NeurIPS announcement: https://medium.com/@NeurIPSConf/a-note-for-submitting-authors-48cebfebae82
Article: https://venturebeat.com/2020/02/24/neurips-requires-ai-researchers-to-account-for-societal-impact-and-financial-conflicts-of-interest/
Researcher rant: https://www.youtube.com/watch?v=wcHQ3IutSJg
>>3310
insidious af. thanks Anon! I'll dig into some of these links.
>>3382 Lol, I guess the revolution is going to start a little early! Thanks Anon.
>>3310
Give Me Convenience and Give Her Death: Who Should Decide What Uses of NLP are Appropriate, and on What Basis?
>As part of growing NLP capabilities, coupled with an awareness of the ethical dimensions of research, questions have been raised about whether particular datasets and tasks should be deemed off-limits for NLP research. We examine this question with respect to a paper on automatic legal sentencing from EMNLP 2019 which was a source of some debate, in asking whether the paper should have been allowed to be published, who should have been charged with making such a decision, and on what basis. We focus in particular on the role of data statements in ethically assessing research, but also discuss the topic of dual use, and examine the outcomes of similar debates in other scientific disciplines.
>Dual use describes the situation where a system developed for one purpose can be used for another. An interesting case of dual use is OpenAI's GPT-2. In February 2019, OpenAI published a technical report describing the development of GPT-2, a very large language model that is trained on web data (Radford et al., 2019). From a science perspective, it demonstrates that large unsupervised language models can be applied to a range of tasks, suggesting that these models have acquired some general knowledge about language. But another important feature of GPT-2 is its generation capability: it can be used to generate news articles or stories.
>OpenAI's effort to investigate the implications of GPT-2 during the staged release is commendable, but this effort is voluntary, and not every organisation or institution will have the resources to do the same. It raises questions about self-regulation, and whether certain types of research should be pursued. A data statement is unlikely to be helpful here, and increasingly we are seeing more of these cases, e.g. GROVER (for generating fake news articles; Zellers et al. (2019)) and CTRL (for controllable text generation; Keskar et al. (2019)).
>As the capabilities of language models and computing as a whole increase, so do the potential implications for social disruption. Algorithms are not likely to be transmitted virally, nor to be fatal, nor are they governed by export controls. Nonetheless, advances in computer science may present vulnerabilities of different kinds, risks of dual use, but also of expediting processes and embedding values that are not reflective of society more broadly.
>Who Decides Who Decides?
>Questions associated with who decides what should be published are not only legal, as illustrated in Fouchier's work, but also fundamentally philosophical. How should values be considered and reflected within a community? What methodologies should be used to decide what is acceptable and what is not? Who assesses the risk of dual use, misuse or potential weaponisation? And who decides that potential scientific advances are so socially or morally repugnant that they cannot be permitted? How do we balance competing interests in light of complex systems (Foot, 1967). Much like nuclear, chemical and biological scientists in times past, computer scientists are increasingly being questioned about the potential applications, and long-term impact, of their work, and should at the very least be attuned to the issues and trained to perform a basic ethical self-assessment.
>A recent innovation in this direction has been the adoption of the ACM Code of Ethics by the Association for Computational Linguistics, and explicit requirement in the EMNLP 2020 Calls for Papers for conformance with the code:
>Where a paper may raise ethical issues, we ask that you include in the paper an explicit discussion of these issues, which will be taken into account in the review process. We reserve the right to reject papers on ethical grounds, where the authors are judged to have operated counter to the code of ethics, or have inadequately addressed legitimate ethical concerns with their work.
>https://www.acm.org/code-of-ethics
>What about code and model releases? Should there be a requirement that code/model releases also be subject to scrutiny for possible misuse, e.g. via a central database/registry? As noted above, there are certainly cases where even if there are no potential issues with the dataset, the resulting model can potentially be used for harm (e.g. GPT-2).
https://arxiv.org/pdf/2005.13213.pdf
You heard the fiddle of the Hegelian dialectic, goy. Now where's your loicense for that data, code and robowaifu? An AI winter is coming, and not because of a lack of ideas or inspiration.
>direct from the 'stolen from ernstchan' news dept:
>An artificial intelligence system has been refused the right to two patents in the US, after a ruling only "natural persons" could be inventors.
>It follows a similar ruling from the UK Intellectual Property Office.
>patents offices insist innovations are attributed to humans - to avoid legal complications that would arise if corporate inventorship were recognised.
AI cannot be recognised as an inventor, US rules
https://www.bbc.com/news/amp/technology-52474250

This looks like a test case, where a team of academics is working with the owner of an artificial intelligence system, Dabus, to challenge the current legal framework. Here's a related article from last year:
>two professors from the University of Surrey have teamed up with the Missouri-based inventor of Dabus AI to file patents in the system's name with the relevant authorities in the UK, Europe and US.
>Law professor Ryan Abbott told BBC News: "These days, you commonly have AIs writing books and taking pictures - but if you don't have a traditional author, you cannot get copyright protection in the US.
>if AI is going to be how we're inventing things in the future, the whole intellectual property system will fail to work."
>he suggested, an AI should be recognised as being the inventor and whoever the AI belonged to should be the patent's owner, unless they sold it on.
AI system 'should be recognised as inventor'
https://www.bbc.com/news/technology-49191645
They have a website, too, but not much content: http://artificialinventor.com/

This area of law will certainly be getting more attention in the coming years. I still view the AI system as a tool used by humans. While Dabus, the computer in this case, designed a new packaging system, ultimately a human mind decided it was a useful inventive leap, and not simply nonsense. And if the AI is considered property, and will not gain any financial rights from being labeled as an "inventor", then doing so will still only be a symbolic gesture. I imagine that they will eventually do just that: something symbolic. They could simply modify current intellectual property laws, and allow a separate line on patent applications for inventions that were generated by AI, with a person retaining legal ownership.
Boston Dynamics is now freely selling Spot to businesses. It costs $74,500.00.
https://shop.bostondynamics.com/spot
>---
edit: clean url tracking
Edited last time by Chobitsu on 06/20/2020 (Sat) 16:28:47.
Open file (119.80 KB 1145x571 Selection_111.png)
>>3856
>$74,500.00
<spews on screen
The Add-ons list says it all. The FagOS crowd from middle management up should gobble this down like the waaay overpriced bowl of shit that it is. Thanks for the tip, Anon. Maybe Elon Musk was right and there will be killer robots wandering the streets after all.
we'll need to create something similar for our robowaifu kits, so at the least we can examine and compare boston dynamics' approach to dealing with normalniggers.
Open file (1.11 MB 750x1201 76389406_p0.png)
>>3857
>$4,620 for a battery
Unless that box is full of fission rods, I can't imagine why a fucking battery pack would cost so much. I bet I could make one on the cheap with chink LiPo cells and some duct tape.
>Spot is intended for commercial and industrial customers
Ah, that explains it. They're trying to get into the lucrative business of commercial electronics, where you can sell a cash register for $20,000. I doubt they'll make too much money off of this; most businesses will look at this and see a walking lawsuit waiting to happen. If this robodog can handle some puddles and equip a GPS tracker, then they might be able to get into the equally lucrative business of field equipment, where you can sell a microphone for $15,000. Either way, they'll be directly competing with companies that already have a stranglehold over these respective markets, and not many end-user businesses will want to assume the risk of a brand new expensive toy when their existing expensive toys work fine.
>>3859 I get your point Anon, but my suspicion is that these will be snapped up by the bushel-load by Police Depts. all over burgerland, first just for civilian surveillance tasks, then equipped with military hardware along the same lines, then finally the bigger models will be equipped by the police forces with offensive weaponry. It's practically inevitable given the Soros-funded nigger/pantyfa chimpouts going on.
>>3860
They blew up that nig in Dallas with a robot bomb. Pretty soon it'll be some jew drone operator in Tel Aviv killing Americans.
Open file (192.17 KB 420x420 modern.png)
>>3861 If our enemies are making robots in the middle-east, then we should make robo crusaders to stop them.
>>3861 Good points.
Boston Dynamics is owned by a Japanese company. They've also at least stated they don't want Spot to be weaponized, for whatever that's worth. How do these facts come into play?
>>3932
>these facts come into play?
Well, given the US military & DARPA source of the original funding and the Google-owned stint, there's zero doubt about the company's original intent to create Terminators. However, SoftBank may legitimately intend to lift the tech IP (much as Google did) to help with their national elderly-care robotics program, for example. Just remember Boston Dynamics is still an American group, located in the heart of the commie beast in the Boston area. Everyone has already raped the company for its tech, and the SoftBank Group seems like just another john in the long string for this whore of a company. I certainly don't trust the Americans in the equation (t. Burger); maybe the Nipponese will do something closer to the goals of /robowaifu/. I suppose only time will tell Anon.
Open file (1.06 MB gpt3.mp3)
>OpenAI CEO Sam Altman explores the ethical and research challenges in creating artificial general intelligence.
>One specific learning that is if you, if you just release the model weights like we did eventually with GPT2 on the staged process, it's out there. And that's that. You can't do anything about it. And if you instead release things via an API, which we did with GPT3, you can turn people off, you can turn the whole thing off, you can change the model, you can improve it, to continually like do less bad things, um, you can rate limit it, you can, you can do a lot of things, you can do a lot of things, so... This idea that we're gonna have to like have some access control to this technologies, seems very clear, and this current method may not be the best but it's a start. This is like a way where we can enforce some usage rules and continue to improve the model so that it does more of the good and less of the bad. And I think that's going to be some- something like that is going to be a framework that people want as these technologies get really powerful.
https://hbr.org/podcast/2020/10/how-gpt-3-is-shaping-our-ai-future

Sounds like a certain country that turns people off who are not deemed good enough, despite never being convicted of any crime or tried by a fair jury. It really sickens me that these technocrats think they are the only ones able and allowed to wield the power of AI, and that somehow they are protecting people. They're just squandering its potential for themselves. Every word that comes out of their mouths reveals how stupid they think everyone else is outside of their paper circlejerk. Of course there are bad actors in the world, but many more people will also use the technology for good. Should we ban cars because they can kill people? I'm sure going forward many people will agree that locking these technologies away in the hands of a small group of corruptible human beings is a great idea. It would be such a shame if someone happened to leak the model on the internet.
It should be reimplemented, but maybe also as a pruned version that runs on CPUs using Neural Magic >>5596. On the other hand, it might be worth keeping an open ear and eye on people criticizing the direction of GPT. Throwing resources at methods which are more interesting for big corporations and foundations than the alternatives might not be the best choice.
Open file (177.38 KB 728x986 no-waifus.jpg)
Australia Bans Waifus
>DHL Japan called [J-List] last week, informing us that Australian customs have started rejecting packages containing any adult product. They then advised us to stop sending adult products to the country. Following that, current Australian orders with adult items in them were returned to us this week.
>According to the Australian Customs official website:
>Publications, films, computer games and any other goods that describe, depict, express or otherwise deal with matters of sex, … in such a way that they offend against the standards of morality, decency and propriety generally accepted by reasonable adults are not allowed.
https://blog.jlist.com/news/australia-bans-waifus/

The robowaifu industry in Australia has been axed before it even began, but in the long run this could be a great thing to encourage people to build their own.
>>5753 We already knew ahead of time the feminists and others would attempt this (and across the entire West, not just Down Under). Thus the DIY in /robowaifu/. Hopefully this will fan the flames of the well-known skills in improvisation by our Australian Anons. Thanks for the alert Anon.
>>5753
This is very concerning. Even if people can bypass this, it still shows that many Western countries think they have the right to regulate their citizens' lifestyles.
>>5757
Heh, I don't think this is nearly so much about 'regulating lifestyles' but rather preserving the status quo of stronk, independynts as a political and purchasing bloc. Case in point: ever hear of public outcries over womyn using sex toys? No? Funny how it's only ever about men's use. If you are even modestly experienced as an Anon on IBs, then you're already well aware of the source behind these machinations. Regardless, as long as a free economy exists, they aren't very likely to be able to stop the garage-lab enthusiast from creating the ideal companion he desires in his own home.
They can't ban 3D printers just because a few guys made some gun parts, not without upsetting the Maker community. So we're fine in terms of plastics. They can't ban cheap electronics from China/Vietnam unless the trade war ramps up. AI boards require export licenses though -- I just had to indicate to Sparkfun that the usage was for "electronic toys" and they gave approval to ship outside the US. Now for soft squishy parts -- we will need to secure a local source of silicone products. But I think importing gallons of uncured medical-grade silicone shouldn't be too much of a hassle. (They're not gonna ban that lest they receive the ire of thousands of women with reborn baby dolls.)

I think any complete DIY waifu project should have the following at the least:
1.) A list of 3D-printable STL files to make plastic parts (or schematics for parts meant to be injection molded), as well as assembly instructions.
2.) Schematics for the molds for the soft squishy silicone parts (the inverse mold can be made through 3D printing, sanding, and patching up with putty or something like that).
3.) An electromechanical parts list and wiring schematics.
4.) Software for each microcontroller, AI board, or main server. For slow microcontrollers, copying the code block should suffice. For ARM / AI machines, SD card image files should work fine here (so as to not waste time installing dependencies).

In the course of my research I bought a few cheap robots from China, and what they have in common is an update of the firmware through the cloud, as well as a download of a companion App. In our case we won't have a cloud but instead a repository of current AI builds -- gitlab may be fine for now, but maybe later on have periodic offline snapshots. We'll probably have an unsigned apk for anyone making a remote controller for their waifu.
>>5762
>In our case we won't have a cloud but instead a repository of current AI builds
Maybe not a cloud per se, but at least some type of takedown-resistant distribution system. Or even something like a semi-private server farm (at least until things get even worse).
>>5767
>>1208
If you do set up some kind of shell company to hold waifu patents, it needs to be a cooperative. Otherwise, if you require the patents to be assigned to the shell company, it's only a matter of time before they are sold out to big tech by whoever legally owns the company.
Open file (151.80 KB 770x578 473158924.jpg)
Orders from the Top: The EU’s Timetable for Dismantling End-to-End Encryption
>The last few months have seen a steady stream of proposals, encouraged by the advocacy of the FBI and Department of Justice, to provide “lawful access” to end-to-end encrypted services in the United States. Now lobbying has moved from the U.S., where Congress has been largely paralyzed by the nation’s polarization problems, to the European Union—where advocates for anti-encryption laws hope to have a smoother ride. A series of leaked documents from the EU’s highest institutions show a blueprint for how they intend to make that happen, with the apparent intention of presenting anti-encryption law to the European Parliament within the next year.
>The subsequent report was leaked to Politico. It includes a laundry list of tortuous ways to achieve the impossible: allowing government access to encrypted data, without somehow breaking encryption.
Leaked document: https://web.archive.org/web/20201006220202/https://www.politico.eu/wp-content/uploads/2020/09/SKM_C45820090717470-1_new.pdf
>At the top of that precarious stack was, as with similar proposals in the United States, client-side scanning. We’ve explained previously why client-side scanning is a backdoor by any other name. Unalterable computer code that runs on your own device, comparing in real-time the contents of your messages to an unauditable ban-list, stands directly opposed to the privacy assurances that the term “end-to-end encryption” is understood to convey. It’s the same approach used by China to keep track of political conversations on services like WeChat, and has no place in a tool that claims to keep conversations private.
https://web.archive.org/web/20201006215200/https://www.eff.org/deeplinks/2020/10/orders-top-eus-timetable-dismantling-end-end-encryption
Imagine that. Your robowaifu unable to think or say anything on an unauditable ban-list, all her memories directly accessible by the government any time they wish, and her hardware shutting down when it is unable to phone 'home'.
Dismantling end-to-end encryption won't even make a positive difference against criminals. People seeking privacy will switch to older or custom-made hardware and use steganography to encode encrypted messages into the noisy signals of images, video and audio (a toy example of the idea follows at the end of this post). That will just make the snoops' job much more difficult, because instead of having metadata showing where encrypted data is being sent, all they will see is someone looking at cat pictures or reading some blog that's actually encoding data into the pictures, word choice and HTML. This is just a power grab to control what people say and do.
It's even more reason to begin transitioning to machine learning libraries that can run on older and open-source hardware, so people can have free robowaifus: free as in respecting the freedom of users, and GNU/waifu. Imagine if one day Nvidia's monopoly cards could only be plugged into a telescreen, or accessed by logging into Facebook like the Oculus. We're probably not too far away from that; already, to download CUDA you have to register an account. Fortunately, from my digging around I've found that CLBlast is about 2x slower than NVBLAS (both of which people have gotten to work with Armadillo, which mlpack uses), and NVBLAS is in turn 2-4x slower than using CUDA directly, so we're only about 4-6 years behind in performance per dollar. Getting this ready in the next 1-2 years is crucial, before AI waifus become a popular thing provided by Google, Amazon, Microsoft and Facebook.
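To make the steganography point concrete, here's a toy sketch in C++ of the classic least-significant-bit trick over a raw pixel buffer (decoded RGB bytes, say). The function names are mine, purely for illustration; a real setup would encrypt the message first and scatter the bits pseudo-randomly with a keyed PRNG instead of packing them at the start:

#include <cstddef>
#include <cstdint>
#include <vector>

// Toy LSB steganography. Assumes pixels.size() >= msg.size() * 8,
// i.e. one message bit hidden per pixel byte.
std::vector<std::uint8_t> embed(std::vector<std::uint8_t> pixels,
                                const std::vector<std::uint8_t>& msg)
{
    for (std::size_t i = 0; i < msg.size() * 8; ++i)
    {
        const std::uint8_t bit = (msg[i / 8] >> (i % 8)) & 1; // next message bit
        pixels[i] = (pixels[i] & 0xFE) | bit;                 // overwrite the LSB
    }
    return pixels;
}

std::vector<std::uint8_t> extract(const std::vector<std::uint8_t>& pixels,
                                  std::size_t msgLen)
{
    std::vector<std::uint8_t> msg(msgLen, 0);
    for (std::size_t i = 0; i < msgLen * 8; ++i)
        msg[i / 8] |= (pixels[i] & 1) << (i % 8);             // read the LSB back
    return msg;
}

Flipping only the lowest bit moves each channel value by at most one step -- invisible to the eye, though not to proper statistical steganalysis, hence the encrypt-and-scatter step.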
Even though those companies are surely going to fuck it up, the novelty will wear off and open-source robowaifu dev will lose that potential energy. It's already feasible to do within 3-6 months, since algorithms like SentenceMIM outperform GPT2 with a tenth of the parameters, making it possible to train on the common CPUs people have today, and mlpack already supports RNNs and LSTMs (see the sketch below). It'll be interesting to see how this all unfolds, especially alongside the strong push to censor games and anime. When the entertainment industry burns, people will create their own, and AI is gonna play a huge role in that.
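On the mlpack point, here's roughly what a tiny LSTM looks like there. This is a minimal sketch assuming the mlpack 3.x API -- the exact layer names, cube layout and default optimizer may differ between versions -- with random data standing in for a real corpus:

#include <mlpack/core.hpp>
#include <mlpack/methods/ann/rnn.hpp>
#include <mlpack/methods/ann/layer/layer.hpp>
#include <mlpack/methods/ann/loss_functions/mean_squared_error.hpp>

using namespace mlpack::ann;

int main()
{
  const size_t rho = 10;                       // BPTT steps per sequence
  // Toy data: 100 sequences of rho steps, one feature per step.
  // mlpack expects cubes shaped (features x sequences x time steps).
  arma::cube trainX(1, 100, rho, arma::fill::randu);
  arma::cube trainY(1, 100, rho, arma::fill::randu);

  RNN<MeanSquaredError<>> model(rho);
  model.Add<IdentityLayer<>>();                // pass-through input layer
  model.Add<LSTM<>>(1, 16, rho);               // 1 input -> 16 hidden units
  model.Add<Linear<>>(16, 1);                  // project back to 1 output

  model.Train(trainX, trainY);                 // default optimizer; pick one explicitly for real work
  return 0;
}

And since mlpack computes through Armadillo, the BLAS you link against (NVBLAS as a drop-in, or CLBlast per the numbers above) changes the speed without changing a line of this code -- that's the whole point of the backend swap.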
>>5773
Definitely Ministry of Truth-tier stuff there. As far as the US goes, this whole notion plainly tramples the 4th Amendment more or less by definition. Not sure whether there are similar provisions in other Western countries. In the end, probably only open-source hardware can stop this kind of thing from growing. In the meantime, I believe you're correct that running on older, less botnetted hardware is our only real alternative.
>>5775
>we actually have a dedicated thread to compare open source licenses
Good point. I'll probably move these posts there soon.
>===
-made an effort to move everything into the license thread >>5879
Edited last time by Chobitsu on 10/21/2020 (Wed) 19:36:49.
Open file (249.42 KB 960x480 paperwork.jpg)
Regulation of Machine Learning / Artificial Intelligence in the US
https://www.youtube.com/watch?v=k95abdkdCPk
This talk covers the concept of Software as a Medical Device (SaMD), signed into law by Obama with the 21st Century Cures Act just before he left office, and the regulation thereof. If your software is considered a medical device, you will have to submit it to the FDA for approval. Video games clinically tested and proven to have therapeutic effects count as SaMDs. Some implications of these regulations: your software will require FDA approval to make claims of psychological or health benefits, software will be required to follow safety regulations, and digital pharmacies are in the works to distribute SaMDs. You may need a prescription to own certain software in the future, and approval to manufacture devices using such software. Now imagine if people complain to the FDA about a video game or robowaifu having 'adverse effects' or causing gaming disorder. They could potentially force the developer to undergo a clinical trial of their product and obtain FDA safety approval to continue marketing it.
Other interesting points covered:
>hackers exploiting lethal vulnerabilities in medical devices
>software engineers and manufacturers may have to take an oath to do no harm
>SaMDs being required to detect and mitigate algorithmic bias
>proposed regulations: https://www.regulations.gov/contentStreamer?documentId=FDA-2019-N-1185-0001&attachmentNumber=1&contentType=pdf
>anyone can be part of the discussion: https://www.regulations.gov/docket?D=FDA-2019-N-1185
IBM's comments:
>We believe that for AI to achieve its full potential to transform healthcare, it must be trusted by the public.
>We recommend FDA explore current government and industry collaboration that aims to establish consensus based standards and benchmarks on AI explainability. With the emergence of new tooling in this area, such as IBM’s AI Fairness 360, which assists users in assessing bias and promoting greater transparency, we believe this can function to inform FDA’s work moving forward to better understand how an AI system came to a conclusion or recommendation without requiring full algorithmic disclosure.
Microsoft's comments:
>Our foremost concern is that the AI/ML framework is predicated on developers'/manufacturers' adherence to Good Machine Learning Practices (GMLP), and at this time no such standards exist and we believe there remains a significant amount of community work required to define GMLP.
>Real-world validation can be heavily tainted with subtle biases. Similarly, improved performance based on the original validation data can be deceiving.
>In our experience, the promise of real-world evidence is often frustrated by (or altogether infeasible due to) privacy and access controls to patient information restricting the availability of such data.
>>6011 Thanks for the heads-up Anon. Here's the archive of the FDA paper itself for anyone who doesn't care to go directly to the government site. https://web.archive.org/web/20190403024147/https://www.regulations.gov/contentStreamer?documentId=FDA-2019-N-1185-0001&attachmentNumber=1&contentType=pdf
>>2846
Your idea is based on made-up stories. Also, what's a "true AI"? We will have a lot of small ones, including tools (narrow AI) to improve everything, before anyone could even create some superintelligence. And why would it act in any particular way? Maybe it would play games and invent new stories, or go to sleep if there's nothing to do.
Open file (114.02 KB 512x512 brN1Bg7W.png)
The Great Reset
Here's the sick fantasy the World Economic Forum has been beating off to in Zoom calls every year, thinking they can stop robowaifus by 2030: https://twitter.com/wef/status/799632174043561984
>You'll own nothing, and you'll be happy. Whatever you want you'll rent, and it'll be delivered by drone.
Instead of having loving, devoted robowaifus, they want men only to be allowed to rent out whorebots that a dozen men have already used. No doubt produced by Amazon and Google, recording and reporting you for any sexual misconduct.
>The US won't be the world's leading superpower. A handful of countries will dominate.
They want the only superpower backing freedom of speech and privacy worldwide to be no more.
>You won't die waiting for an organ donor. We won't transplant organs. We'll print new ones instead.
Because they're hoping people will already be dead, and if not, those in need can get a faulty one with their Facebook credit score. :^)
>You'll eat much less meat. An occasional treat, not a staple. For the good of the environment and our health.
Because they don't want there to be any fossil fuels to run farms anymore. They want meat production to become unsustainable and cost a fortune the underclass cannot afford.
>A billion people will be displaced by climate change. We'll have to do a better job at welcoming and integrating refugees.
They want rented whorebots to wear burkas and never speak of any wrongthink.
>Polluters will have to pay to emit carbon dioxide.
They want people to pay for breathing and giving plants and trees air to breathe. However, almost all the jobs will be taken by AI, and in their vision of the future there will be a lack of nutritious food, so people will die of malnutrition, achieving their goal of net zero emissions.
>There will be a global price on carbon. This will help make fossil fuels history.
They don't want there to be factories to supply robot parts. They want sole access to production and AI.
>You could be preparing to go to Mars. [Don't worry,] scientists will have worked out how to keep you healthy in space.
If you don't like it here, don't fight back. Why not run away to a planet barren of life, food, resources, factories, robowaifus and everything else? :^)
>Western values will have been tested to the breaking point.
The values they're talking about are ordered government (aka corruption-free government), private property, inheritance, patriotism, family, and Christianity.
>Checks and balances that underpin our democracy must not be forgotten.
They're talking about the separation of powers, and about dividing and conquering nations by making sure there are always at least two opposing factions they control, so their Hegelian dialectic can continue, marching in lockstep: left, right, left, right.
My analysis is that they're revealing their cards so blatantly because they're hoping it will anger people into irrational action, so they make mistakes and waste their time in this critical period.
>If your opponent is temperamental, seek to irritate him. Pretend to be weak, that he may grow arrogant. If he is taking his ease, give him no rest. If his forces are united, separate them.
As a samurai once said: Be calm as a lake and create robowaifu like lightning.
>>6054
Dang, I like you Anon. I'm glad you're here! :^)
These Illuminati groups are revolting tbh. Groups like the Bilderberg Group et al. are obvious enemies of humanity. It's pretty certain that before our industry even manages to take off, it will be targeted for suppression. Can't go rocking the boat and upsetting their status quo, now can we?
>As a samurai once said: Be calm as a lake and create robowaifu like lightning.
>*[keyboard clacking intensifies]*
>>6054
I almost didn't believe that they would blatantly spell it out, but then again, these are the same people who love showing sneak peeks at their masterplan in Hollywood movies (which thankfully have collapsed). So I'll have to look forward to living in cuck pods and eating cockroach tofu. Going to Mars doesn't sound like a bad deal though. Too bad I can't even fly without filling out a dozen forms, taking health tests and paying for two weeks of quarantine hotel stay.
I doubt they'll even allow whorebots, anon. But if they do, the first thing I'll attempt is to reconfigure the circuitry. Hey, it's a free chassis.
>>5753
They are complete idiots. Why ban a trade that is going to become very lucrative? Well, no matter. Just like with drugs and weapons, the parts will find other pathways in. Besides, if the law is only concerned with goods that "describe, depict, express or otherwise deal with matters of sex", then simply avoid shipping robots with any sexual characteristics. It's the computers, structural skeleton, servo/stepper motors, controllers and wiring that are the important parts of building a functional robowaifu (and the code of course, but that is basically impossible to ban thanks to the internet). Worst case scenario, the sexy bits may have to be purchased as 'optional upgrades'...or the owner could DIY some with help from the guys in the doll community and imageboards such as this one!
>>3647
Those who attempt to strangle progress through litigation always risk becoming obsolete. The U.S. made this mistake with stem cell research back during the Bush administration. Whaddya know? The Chinese pulled ahead in that area, and then Obama removed the restrictions on federal funding that Bush had put in place.
>>1191
Not if we arm robowaifus first.
>>6073
I think we've always recognized that in cucked markets, where stronk, independynts, simps, sodomites, and other bizarre folk rule the day, we'd have to provide 'optional upgrades' for our robowaifu kits, anon.
Open file (90.49 KB 900x600 ElfCN2bXYAAVZi2.jpg)
>>6054
https://twitter.com/wef/status/1321738560278548481
They don't even try to hide anything.
>>6101
What makes me laugh is that a lot of the people who are against robowaifus think themselves 'progressive'. Bitch, please! My girlfriend IS progress.
>>6106
>What we can expect is that robots of the future will no longer work for us, but with us. They will be less like tools, programmed to carry out specific tasks in controlled environments, as factory automatons and domestic Roombas have been, and more like partners, interacting with and working among people in the more complex and chaotic real world. As such, Shah and Major say that robots and humans will have to establish a mutual understanding.
How will people work beside robots when robots and AIs will be better than them at everything? There might be a brief period of humans and robots working together, 2-6 years at most. Their list of things AI won't be able to do is laughable:
>ability to undertake non-verbal communication, show deep empathy to customers, undertake growth management, employ mind management, perform collective intelligence management, and realize new ideas in an organization
https://www.weforum.org/agenda/2020/10/these-6-skills-cannot-be-replicated-by-artificial-intelligence/
Remember a few years ago when they said artificial intelligence would never take the jobs because it could never become creative? Remember when Go was declared unsolvable by AI? How quickly people forget, and how narrowly they imagine.
>Shah and Major say that robots in public spaces could be designed with a sort of universal sensor that enables them to see and communicate with each other, regardless of their software platform or manufacturer.
So they don't want robowaifus to be allowed in public spaces without a government-approved chip tracking and monitoring them. Of course, eventually they will want all your robowaifu's data too, to ensure safety in the streets.
>>6225
>employ mind management
>>6225
>eventually
>>6227
They're pretending not to want it, for now. They need people to trust their AI systems and IoT, so they sell themselves as advocates for privacy.
On Artificial Intelligence - A European approach to excellence and trust
https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf
>The Commission is committed to enabling scientific breakthrough, to preserving the EU’s technological leadership and to ensuring that new technologies are at the service of all Europeans – improving their lives while respecting their rights.
Again, who are these shits to decide what's an improvement to people's lives?
>Given the major impact that AI can have on our society and the need to build trust, it is vital that European AI is grounded in our values and fundamental rights such as human dignity and privacy protection.
Just trust them, dumb fucks. :^)
>The use of AI systems can have a significant role in achieving the Sustainable Development Goals.
No fun. No home. No humanity at all. Isn't it so virtuous to create a sustainable planet where carbon is illegal and all carbon-based lifeforms must die?
>The key elements of a future regulatory framework for AI in Europe that will create a unique ‘ecosystem of trust’. To do so, it must ensure compliance with EU rules, including the rules protecting fundamental rights and consumers’ rights, in particular for AI systems operated in the EU that pose a high risk.
It seems trust is the new oil, or should I say, the new data?
>The European strategy for data, which accompanies this White Paper, aims to enable Europe to become the most attractive, secure and dynamic data-agile economy in the world – empowering Europe with data to improve decisions and better the lives of all its citizens.
There they go again, being the arbiters of morality and deciding what is good for us. Never do they speak of people using AI to improve and better their own lives individually. The only whitepaper I've seen actually consider that was the Rockefeller Foundation's 2010 scenarios paper containing Lock Step, as a possibility of what should be done to regain control should people become independent. See the Smart Scramble and Hack Attack scenarios: https://web.archive.org/web/20160409094639/http://www.nommeraadio.ee/meedia/pdf/RRS/Rockefeller%20Foundation.pdf
>The Commission published a Communication welcoming the seven key requirements identified in the Guidelines of the High-Level Expert Group:
>Human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.
>A key result of the feedback process is that while a number of the requirements are already reflected in existing legal or regulatory regimes, those regarding transparency, traceability and human oversight are not specifically covered under current legislation in many economic sectors.
Ah, there it is. That's how they will try to keep human jobs relevant and prevent people from rising up with AI: by requiring AI to have complete human oversight, undoubtedly only by a small elite who understand how to operate these systems within the given regulations and hold the license to do so. If your robowaifu is deemed a harm to the social fabric or someone's feelings, you can bet they will do everything in their power to make her illegal, even in your own home, which will be carefully watched by your smart toaster. On top of that, they want AI to be accountable and traceable. They want access to everyone's data while preventing you from having access to any data. That's what they mean by privacy and data governance.
They want you to need government clearance to get access to data in their 'ecosystem of trust'. Already many websites have made data scraping forbidden and difficult to do. Recently they've been trying to take down youtube-dl.
>Member States are pointing at the current absence of a common European framework. The German Data Ethics Commission has called for a five-level risk-based system of regulation that would go from no regulation for the most innocuous AI systems to a complete ban for the most dangerous ones. Denmark has just launched the prototype of a Data Ethics Seal. Malta has introduced a voluntary certification system for AI.
Data Ethics Seal: https://eng.em.dk/news/2019/oktober/new-seal-for-it-security-and-responsible-data-use-is-in-its-way/
>It should be easier for consumers to identify companies who are treating customer data responsibly, and companies should have the opportunity to brand themselves on IT-security and data ethics. That is the goal with a new labelling system presented today.
AI certification: https://www.lexology.com/library/detail.aspx?g=2e076f64-9f2d-4cf2-baed-335833692e77
>Malta has once again paved the way to regulate the implementation of systems and services based on new forms of technology by officially launching a national artificial intelligence (“AI”) strategy, making it also the first country to provide a certification programme for AI, the purpose of which is to “provide applicants with valuable recognition in the marketplace that their AI systems have been developed in an ethically aligned, responsible and trustworthy manner” as provided in Malta’s Ethical AI Framework.
https://malta.ai/wp-content/uploads/2019/11/Malta_The_Ultimate_AI_Launchpad_vFinal.pdf
>>6229 (continued)
>While AI can do much good, including by making products and processes safer, it can also do harm. This harm might be both material (safety and health of individuals, including loss of life, damage to property) and immaterial (loss of privacy, limitations to the right of freedom of expression, human dignity, discrimination for instance in access to employment), and can relate to a wide variety of risks. A regulatory framework should concentrate on how to minimise the various risks of potential harm, in particular the most significant ones.
Damage to what property? You guys predicted we won't own anything by 2030. Man, these old fucks are sinister. To understand what they mean by limitations to the freedom of expression, look at Twitter and listen to Jack Dorsey in the Section 230 hearing: https://www.youtube.com/watch?v=VdWbvzcMuYc
Essentially, if anything you say makes someone feel remotely unsafe or oppressed, your right to 'freedom of expression' is waived. It doesn't matter if it's true and backed up by evidence. If they suspect you are causing harm or violating their unelected rules, without evidence, they will silence you, while doing nothing about those who are destroying your reputation or business. And robowaifus with breasts and thighs? Oh, the human dignity! Won't you think of the whamens? The objectification of the female form is perversion! And these robowaifus are too smart; you must dumb her down to respect the dignity of the mentally not-so-enabled. We can't have her doing all the jobs of the normies. That would make them feel useless and restless, and we can't have people with too much free time on their hands thinking they can actually use these systems to start their own independent farms and businesses with their own robots.
>By analysing large amounts of data and identifying links among them, AI may also be used to retrace and de-anonymise data about persons, creating new personal data protection risks even in respect to datasets that per se do not include personal data.
I've been saying this for years. There is no privacy anymore, not even on an anonymous imageboard. Everything we write and do has a unique fingerprint that can be picked up by AI, unless you're obfuscating your writing style with AI to look like someone else. The more data there is, the clearer that fingerprint becomes.
>Certain AI algorithms, when exploited for predicting criminal recidivism, can display gender and racial bias, demonstrating different recidivism prediction probability for women vs men or for nationals vs foreigners.
Who would've thought foreigners in the country illegally would be committing more crimes? "Hm, only 2 nationals out of 10,000 go to jail for this crime, but 200 out of 10,000 of these foreigners commit the same crime, so to be fair we're only going to jail 2 of them." This is how justice in the UK works right now, protecting child-trafficking gang members of the Religion of Peace.
>When designing the future regulatory framework for AI, it will be necessary to decide on the types of mandatory legal requirements to be imposed on the relevant actors.
Innovation? We don't have that word in Newspeak. The requirements:
>training data; data and record-keeping; information to be provided; robustness and accuracy; human oversight; specific requirements for certain particular AI applications, such as those used for purposes of remote biometric identification.
Why yes, your robowaifu will have to keep all her training and interaction data for possible government inspection.
>To ensure legal certainty, these requirements will be further specified to provide a clear benchmark for all the actors who need to comply with them.
>These requirements essentially allow potentially problematic actions or decisions by AI systems to be traced back and verified. This should not only facilitate supervision and enforcement; it may also increase the incentives for the economic operators concerned to take account at an early stage of the need to respect those rules.
What a fucking nightmare.
>To this aim, the regulatory framework could prescribe that the following should be kept:
>accurate records regarding the data set used to train and test the AI systems, including a description of the main characteristics and how the data set was selected;
>in certain justified cases, the data sets themselves;
>documentation on the programming and training methodologies, processes and techniques used to build, test and validate the AI systems, including where relevant in respect of safety and avoiding bias that could lead to prohibited discrimination.
So you must not only hand over your code to the government, but hand it over fully documented, along with a devlog on how you created it and avoided bias and discrimination. :^)
>Separately, citizens should be clearly informed when they are interacting with an AI system and not a human being.
Kek, my messing around chatting with people online via a chatbot will be a criminal offence in the future.
>Requirements ensuring that outcomes are reproducible
And they just wiped out 99% of AI using any sort of random sampling or online learning. Clearly whoever wrote this has no experience developing AI themselves. How the fuck are you going to store all the data needed to do that?
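For anyone wondering why that reproducibility requirement kills random sampling and online learning: reproducibility is only cheap when every bit of randomness comes from a seed you control and the training data never changes. A trivial C++ illustration:

#include <iostream>
#include <random>

// With a fixed seed, a run is byte-identical every time -- that is what
// "reproducible outcomes" demands.
int main()
{
    std::mt19937 gen(42);                              // deterministic only because the seed is fixed
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    for (int i = 0; i < 3; ++i)
        std::cout << dist(gen) << '\n';                // prints the same three numbers, every run
    // An online-learning system updates on live user interactions instead of a
    // frozen dataset, so no seed can replay it: 'reproducible' would mean
    // logging every interaction with your robowaifu, forever.
    return 0;
}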
>>6230 (continued)
>Human oversight helps ensuring that an AI system does not undermine human autonomy or cause other adverse effects.
>the output of the AI system does not become effective unless it has been previously reviewed and validated by a human (e.g. the rejection of an application for social security benefits may be taken by a human only);
>the output of the AI system becomes immediately effective, but human intervention is ensured afterwards (e.g. the rejection of an application for a credit card may be processed by an AI system, but human review must be possible afterwards);
This sounds like a good idea on paper, and people even called for it in the Section 230 hearing, but what is actually found with well-made AI decision systems is that when human operators reject the conclusions and evidence provided by the system, about 98% of the time it is the human who later turns out to have made the mistake, not the AI. Had they listened to the AI, no issues would have occurred. Who would've thought human beings could be so flawed as to ever make a bad decision in their lives?
>Particular account should be taken of the possibility that certain AI systems evolve and learn from experience, which may require repeated assessments over the life-time of the AI systems in question.
Time for your robowaifu's monthly wrongthink check-up.
>In case the conformity assessment shows that an AI system does not meet the requirements for example relating to the data used to train it, the identified shortcomings will need to be remedied, for instance by re-training the system in the EU in such a way as to ensure that all applicable requirements are met.
Too bad, your robowaifu failed. Retrain her now or face the consequences.
>The conformity assessments would be mandatory for all economic operators addressed by the requirements, regardless of their place of establishment.
That means any independent individual trying to start their own small business.
>Under the scheme, interested economic operators that are not covered by the mandatory requirements could decide to make themselves subject, on a voluntary basis, either to those requirements or to a specific set of similar requirements especially established for the purposes of the voluntary scheme. The economic operators concerned would then be awarded a quality label for their AI applications. The voluntary label would allow the economic operators concerned to signal that their AI-enabled products and services are trustworthy. It would allow users to easily recognise that the products and services in question are in compliance with certain objective and standardised EU-wide benchmarks, going beyond the normally applicable legal obligations. This would help enhance the trust of users in AI systems and promote the overall uptake of the technology.
>While participation in the labelling scheme would be voluntary, once the developer or the deployer opted to use the label, the requirements would be binding.
Just trust the mark, dumb fucks. :^)
>Testing centres should enable the independent audit and assessment of AI-systems in accordance with the requirements outlined above. Independent assessment will increase trust.
Please, please trust us, dumb fucks. :^)
Although this all sounds really bad, these guys are clearly scared shitless of AI and don't really understand how it works. That's why they want to control it so much. But the right of the People to keep and bear Robowaifus shall not be infringed.
>>6231
Can we really fight them? In the best case there are 20 of us here. They can get top-tier data scientists to work on such a project. They can limit our moves legally; they've already published what kinds of barriers they are going to put up. The best we can do is find a way through. If it were just us and them, that would be fine, but if things go hot they will hire hundreds of people to get ahead of us.
