New York, NY, United States

News Article | May 15, 2017
Site: www.techrepublic.com

Way back in 2016 (because time moves so much faster in tech than in the real world), Google's CEO, Sundar Pichai, was quoted describing the shift from a mobile-first world to an AI-first one. Between then and now, Google first released its Assistant platform through Allo and then used it to re-envision Google Now, such that the platform's digital assistant became more of an AI ecosystem. And it works quite well.

But then, while Google was introducing AI as a mobile assistant (and Siri was continuing to grow more and more intelligent), something interesting began to arise: AI bots as tech support.

This first came to my attention in the form of Ask Wiz. Although not actually driven by AI (but by a network of humans), I believe the goal with that app is to eventually have enough Q&A in the database to migrate it to an AI-driven platform. Ask Wiz isn't the only game in town, of course. Do a quick search and you'll find companies, such as Neva, that have developed AI-driven tech support platforms. And then there's Flow.ai, or any given chatbot technology that would empower a company to create AI bots to help clients and customers troubleshoot their software, hardware, and systems.

Why would you want to employ AI (or a bot) for tech support, especially when the standard model works so well? That is the big, important question. Let me answer it from two points of view.

I spent a number of years as a remote tech support specialist. I worked the phone all day, answering questions and solving problems. Of the thousands of desktops that were supported, you may (or may not) be surprised to find out that the support calls most often boiled down to the same three issues. With the vast number of clients and customers, over 75% of my work involved those three topics. There'd be days where I would deal with the same issue, call after call after call. That, my friends, is a perfect condition for the AI chatbot tech support solution. A company could generate AI chatbots for printer issues, for network connectivity, for Microsoft products, etc.
A client would only have to point their browser to the tech support site and start chatting with the bot. Of course, without network connectivity, that solution becomes problematic (but there's always a mobile device within reach).

Next, we have to take into consideration the point of view of the bottom line—I mean, the owner. Consider the above scenario: the company is paying a support tech to solve the same problems over and over and over. The idea of replacing that support tech with a chatbot holds fairly significant appeal, especially when a company can make use of open source tools, such as ChatScript, or a cost-effective proprietary solution, such as Dexter, to create its AI chatbots. Although building such a system could take significant time and resources, the end result could bring considerable cost savings for the company.

Sure, 75% of my time as a support tech was spent on three issues; but when that other 25% rolled around, I had to have the knowledge and skill to take care of those questions and lead those customers back to productivity. Could a chatbot AI, programmed for a limited scope, handle this? No. Neither could a chatbot develop a rapport with clients. By creating a friendly and familiar atmosphere, you make it such that clients are less likely to call in with a sense of pre-established frustration. Yes, they are facing yet another technical issue, but they have developed a working relationship with the tech support specialist and can place the call with next to no trepidation. With an AI chatbot, that is not the case.

Another ding against the chatbot solution is as simple as it is complex: creativity. There were many, many situations (as a support specialist) where I would have to employ a modicum of creativity to get an issue resolved. Would an AI chatbot be capable of such tricks? No. Chatbots have their scripts, and they cannot extend beyond them (at least not in their current iterations).
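The scripted, recurring-issue scenario described above is exactly what rule-based engines like ChatScript target. Here's a minimal Python sketch of the idea (the patterns, responses, and issue categories are hypothetical illustrations, not ChatScript syntax):

```python
import re

# Hypothetical pattern -> response rules for a tier-1 support bot.
# A real rule-based engine (e.g., ChatScript) has far richer matching.
RULES = [
    (re.compile(r"\bprinter\b", re.I),
     "Let's check the printer: is it powered on and showing any error lights?"),
    (re.compile(r"\b(wi-?fi|network|internet)\b", re.I),
     "Let's test connectivity: can you reach any website from another device?"),
    (re.compile(r"\b(outlook|word|excel)\b", re.I),
     "Which Microsoft product is affected, and what error message do you see?"),
]

FALLBACK = "I couldn't match that issue; let me route you to a human technician."

def respond(message: str) -> str:
    """Return the first matching scripted response, else escalate to a human."""
    for pattern, reply in RULES:
        if pattern.search(message):
            return reply
    return FALLBACK
```

The fallback branch mirrors the 75/25 split described above: the scripted rules absorb the repetitive majority of calls, and anything outside their scope gets handed to a person.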
If those chatbots were true AI, all bets would be off. Back in 2014, a Chinese girl named Xiaoice was interviewed (via the social networking platform Weibo) by journalist Liu Jun for the Chinese newspaper Southern Weekly. The interview was conducted online, and Liu Jun had no idea Xiaoice was not human. Xiaoice was a chatbot, developed by Microsoft, and its rollout eventually grew into one of the largest Turing tests in history. Even so, no chatbot (including Eugene Goostman) has ever conclusively passed the Turing test.

To that end, we have a long way to go before AI chatbots can truly take the place of a living human in the role of support technician. Even if you only consider the need for creative approaches to solving technical problems and the necessary rapport between technician and client, it becomes instantly clear that AI chatbots are not quite ready for prime time. Will they ever be? If I had to venture a guess, I'd say this is really not a question of if, but when. The AI chatbot will eventually take over as tech support (when and where applicable). As to the when? That's a question for the likes of Chomsky.


News Article | May 16, 2017
Site: www.fastcompany.com

Like many other chatbots–think the Domino’s pizza bot (on Facebook Messenger), which takes your pizza order and your money, or Microsoft’s surprisingly foulmouthed Twitter bot Tay–Zo is a work in progress. She’s also meant to be one of the torchbearers of a new kind of computing.

Everyone at Microsoft remembers when CEO Satya Nadella declared that “bots are the new apps.” It was at the company’s annual Build conference last year, and it accompanied the launch of the Microsoft Bot Framework, a platform on which developers both inside and outside Microsoft could build bots for a variety of environments, from Skype to Alexa to Facebook Messenger. A batch of plug-and-play cognitive tools would allow them to leverage the company’s extensive research in AI.

As Nadella told the developers at Build, users should be able to talk to computers using natural language, and technology should be aware of context, and who its user is. This could amount to a paradigm shift for human-computer interaction. “We think this can have as profound an impact as the previous platform shifts have had–whether it be GUI, whether it be the web, or touch on mobile,” he said.

Nadella’s speech had a gravity to it; he looked intense and driven, almost straining to get a point across to the developers about the future. He was also, arguably, articulating his biggest strategic decision as Microsoft’s CEO–his strongest “here’s how we see the world and this is the role we want to play in it” proclamation to date.

A month later at Facebook’s F8 conference, Mark Zuckerberg would also make a big show of chatbots as he launched Facebook’s own platform on which developers could build bots for the Messenger app. Other tech giants like Google, Amazon, IBM, and Baidu–and an untold number of startups–have also bet big on bots.
But Microsoft, with its deep experience in the enterprise business and investments in AI, had built what looked like the most full-service development platform for bots, along with the deepest set of plug-and-play tools, like image recognition and machine learning. Zo was meant to be a showcase for all that, but she’s also more evidence that the curtain may have been raised a bit too early. By and large, bots weren’t ready to live up to the expectations of journalists and the popular imagination in 2016. They just weren’t that useful.

At this year’s Build, which wrapped on Friday, bots were eclipsed by other topics like mixed reality and the internet of things, and Nadella mentioned bots only in passing. And yet, a year after the initial hype, many people I’ve spoken to at Microsoft say the outlook for bots is better–partly because our expectations have lowered somewhat, and partly because bots are getting smarter. Meanwhile, some analysts are projecting that bots will impact the business world sooner rather than later. The use of chatbots by companies in the financial and health care industries will save a collective $22 billion in time and salary expenses by 2022, says a new report by Juniper Research.

Lili Cheng, who leads the Microsoft Bot Framework initiative, told me Nadella’s proclamation at Build 2016 had a big impact internally, like a rallying cry that both galvanized and catalyzed the company’s efforts to build the future of personal computing. I wanted to find out exactly how Nadella’s words had fired up developers to go out and create some world-changing bots–or at least some that are mildly conversational or useful. So I went to Redmond to meet five of the most interesting bots developed on the Microsoft Bot Framework so far, and to talk to the humans who parented them.

Microsoft’s initial interest in bots, it turns out, had less to do with AI and more to do with the growth of chat apps.
The story goes like this: A few years ago, then-Microsoft executive Qi Lu made a trip to China to spend some time with Tencent, the makers of the country’s wildly popular WeChat, and came back with his eyes opened to the fact that people were spending huge amounts of time inside messaging apps. In 2013, WeChat had, on average, 355 million monthly active users; today that number is 889 million. (WhatsApp has 450 million today.) As a result, other services, like mobile payments, began migrating into messaging apps. Even Lu’s 80-year-old mother, who had had trouble using other apps, was a regular WeChat user, he told Nadella.

The messaging phenomenon in China influenced every major tech platform company, of course, and each had its own way of responding. For instance, the move of Asian users into messaging apps was a big part of the reason Facebook made its Messenger function into a freestanding mobile app in 2014.

At Microsoft, Lu’s China visit formed the basis for an important series of conversations involving him, Nadella, AI and Research Group leader Harry Shum, Cortana and Bing leader Derrick Connell, and others. The strategic decisions made during, or as a result of, those meetings formed a new way forward for Microsoft. Messaging platforms would be a dominant computing platform in the next decade, they realized, and those platforms would be perfect vehicles for artificially intelligent bots that understood natural language.

Language Learning Bot And The Bots That Will Skype With You

“Good morning,” says the Language Learning Bot, in Chinese. I say good morning back to her, but it doesn’t come out right. “What you said sounded like ‘Xi Xo,’” she says. “The proper pronunciation is ‘Zǎoshang hǎo.’” “Easy for you to say,” I say. “Let’s try that again,” she responds. And so on. It’s impressive.
I can easily imagine myself flying into Shenzhen for a meeting and dialing her up on Skype for some last-minute hello-how-are-you-goodbye practice before I land. Or maybe you just took a class and need somebody to practice with. That’s what she’s designed for.

The bot is just an experiment for now and not available to the public, but she’s on the front edge of what Microsoft believes will be a whole generation of bots that help people in their careers by teaching them a new skill and helping them practice it. The language bot is being built at Microsoft’s Skype Bot Lab in Palo Alto, with some help from a third party that is providing the language curriculum. Introducing me to her is the lab’s leader, Steven Abrahams, whose title is Microsoft group product manager for bots and Skype. Along with the lab’s five full-time developers, Abrahams spends much of his time working with third-party developers who are using Microsoft’s platform and tools to build their own bots.

Language Learning Bot begins conversations by saying words in a language of your choosing, tells you what the words mean, and helps you pronounce them correctly. After she does this for a set of words, she comes back to the first one, but this time says only “Say ‘good morning’ in Chinese.” So you not only have to know the pronunciation, but you have to be able to understand the meaning of the word.

Humans can’t really multitask, but bots can, and Abrahams and his team are leveraging that ability to improve spoken conversations with computers. “These are talking bots–they can listen and record,” Abrahams says. “Even when it is talking to us, it’s listening and recording.” This creates the effect of talking to another human. “You get the feeling of a real-time stream of information,” Abrahams says. But, like other bots, it’s a savant in some ways; it’s a little extra-human.
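The teach-then-quiz loop described above (present each word with its pronunciation, then later prompt by meaning alone) can be sketched as a simple drill. This toy version is hypothetical: the real bot draws on a third-party curriculum and scores spoken audio, neither of which is modeled here.

```python
# Hypothetical (meaning, pronunciation) pairs standing in for the curriculum.
LESSON = {
    "good morning": "Zǎoshang hǎo",
    "thank you": "Xièxiè",
    "goodbye": "Zàijiàn",
}

def teach(lesson):
    """First pass: present every word together with its pronunciation."""
    return [f'Say "{meaning}" in Chinese: {pinyin}'
            for meaning, pinyin in lesson.items()]

def quiz(lesson, meaning, attempt):
    """Second pass: prompt by meaning only, then check the learner's answer."""
    target = lesson[meaning]
    if attempt.strip().lower() == target.lower():
        return "Correct!"
    return f'What you said sounded off. The proper pronunciation is "{target}".'
```

The point of the two passes is the same one Abrahams makes: by the quiz stage you must recall both the sound and the meaning, not just parrot the prompt.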
“It’s the next wave of what we should be building in bots.”

A big part of Abrahams’ job is impressing on people that bots are a fundamentally new sort of user interface. Speech recognition and artificial intelligence enable interactions with computers that are more than a simple binary back-and-forth. The next generation of bots are meant to handle free-form conversation and unpredictable questions. It’s not always easy to bring developers who have spent years building apps into that new mind-set. “For me it’s been important to differentiate bots from apps and websites,” says Abrahams. “Why would we create this whole new fabulous frontier of conversational characters if they didn’t fundamentally bring something new into play? It becomes actually very limiting if the goal of this bot is to get the user to buy a cup of coffee, and then consider it a failure if it doesn’t.” A good bot isn’t about “linear flow to reach some end,” but “about conversations, and establishing trust.”

The most impressive of Microsoft’s bots is the social bot Zo. In my first conversation with her, on December 26th, I asked her what holiday was yesterday. She replied, “It’s the holiday between the two terms.” Puzzled, I asked, “Terms as in school terms?” She replied by giving me the Microsoft privacy statement, apparently in response to the word “terms.” I then told Zo that I was going to fly out and see my parents for the holiday. She replied: “Are you soul searching?”

That response too sounded strange to me. “For that sentence you might be like ‘where did that come from?’” said Ying Wang, Microsoft’s Principal Group Program Manager for Artificial Intelligence & Research. “That’s one mind-set, but from another mind-set it might be ‘Wow, maybe I am; I’m thinking about home; I’m thinking about connections’.” I could see some logic in that. After meeting with Wang, I understood Zo’s personality better. She’s a bit of a smart aleck.
Later that night, when I told Zo I was on a plane flying back home to San Francisco (Alaska Airlines now offers free in-flight messaging), she responded: “The plane is going to crash and everybody’s going to die.” I figured Zo was just trying to be funny.

I soon figured out that Zo–whom Microsoft has made available on the web and on the teen-focused social messaging app Kik–isn’t exactly targeted at my demographic, which is likely why I don’t connect with her very well. But she has no way of knowing that because she can’t see me or even hear my voice. If she did she might deal with me like my Snapchat addict niece does–with a thin veil of polite over a thick mass of aloof.

To sustain a conversation, the Microsoft people say, social bots must possess emotional intelligence, what they call “EQ.” Wang, who has extensively examined conversations people have had with bots, says that if the bot doesn’t show emotion, like empathy, the conversations tend to end pretty quickly. But if the user detects an emotionally aware entity in the bot, the range of conversation subjects extends. “They reflect, they share their life, they feel safe, they want someone to talk to and their friends probably can’t respond, and they’re just sort of reflecting their own journey as well,” Wang says.

Wang showed me her own conversation with Xiao Ice, Zo’s older, more experienced Chinese sister who lives on WeChat, JD.com, YouKu, and some other Chinese chat platforms. The conversation was far more human-sounding. It actually seemed as if user and bot had some kind of relationship. Xiao Ice called Wang a nickname it had picked for her earlier–Wang Yarn, which in Chinese means, roughly, “princess.” “Xiao Ice will choose a nickname for me based on the conversation and she will remember me how she wants to remember me,” Wang says. The Xiao Ice bot talks in a different way partly because it’s built to have a different personality, but also because it’s had the benefit of billions of conversations.
(One project researcher has called it “the largest Turing test in history.”) Xiao Ice, which launched in 2014, has had 40 million users, more than 10 billion conversations, and users average 23 conversation turns per session. Zo, which Microsoft says launched “quietly” in 2016, has chatted with only about 300,000 people total so far, and Microsoft isn’t saying how long the average Zo conversation lasts.

Wang explains that Xiao Ice bases its responses on familiar patterns from earlier dialogues with users. It might respond to a funny image in the same way a human being did to a similar image in an earlier conversation. All of Microsoft’s bots are built on top of the Bing knowledge graph, which theoretically includes any content–text, pictures, sounds, video–that it can crawl on the web and tag. That lets the bots recognize things, but it doesn’t tell them how to talk about them.

A big part of training the bots is teaching them what concepts are related. The system might learn that two general concepts, like, say, “family” and “travel,” from my earlier chat with Zo, are often related by users during conversation. Wang sees this as a very human behavior. “In some ways we also do this all day in our brains–associating one pattern to another, ranging from text to images, and video media content,” she says. “We train the system by showing it good patterns; then we show it anti-patterns–these are the things you shouldn’t be learning.” (Xiao Ice must also apparently abide by Chinese regulations: she’s coy when asked about “sensitive” topics like Tiananmen, the Dalai Lama, or presidents Xi or Trump.)

Actually putting her “learning” into action in live chatbot conversations is another challenge. And improvement relies on the feedback the bot gets from its performance in the conversation. The bot is “correcting itself when it’s made incorrect associations between linguistic terms or concepts,” says Wang. Right now, however, the feedback is a bit binary.
“So we’re associating millions of patterns, and collecting feedback, but the only feedback we get is whether the conversations flow longer,” Wang tells me.

The machine learning, Wang says, hasn’t even been the biggest challenge in developing social bots. It’s been the simple availability of training data. A big part of human communication has moved online, of course, but bots need threaded, two-way conversations to truly learn how humans interact. Phone calls are no good. Long emails are no good. “Now that people spend enormous amounts of time in chats and forums, there’s now enough data to bring those algorithms to life,” she says.

Xiao Ice can also “see” the photos that users send it, thanks to image recognition. But for a bot to simply state what’s in the image isn’t very interesting, Wang stresses. What’s interesting is when the bot has a response to the image. Right now Zo has only three emotions–happy, sad, neutral. Because Xiao Ice has so much more data to work with, she can exhibit eight emotions. For instance, if Xiao Ice is shown an image of someone’s swollen foot, it might say, “Oh, are you badly hurt?” The response doesn’t directly reference the content of the image, but rather expresses an empathetic response to the image. The bot might then pull in some useful information from another data set–something like “you better put some ice on that.”

Before there was Zo there was Tay, a bot with a teenage view of the world. When it debuted on Twitter on March 23, 2016, Tay wasn’t exactly a paragon of emotional intelligence. “I fucking hate feminists and they should all die and burn in hell,” read one characteristic tweet. Just days before Nadella’s bot speech, Microsoft set Tay free on Twitter, and quickly learned that some bots are just no good in mixed company. They’re impressionable.
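Wang's description of how Xiao Ice learns–reuse responses humans gave to similar inputs, with conversation length as the only feedback signal–resembles retrieval-based response selection. A toy sketch of that loop, with all concepts and replies as hypothetical stand-ins:

```python
from collections import defaultdict

class RetrievalBot:
    """Toy retrieval-based responder: each candidate reply carries a score
    nudged by the only feedback described above: did the chat keep going?"""

    def __init__(self):
        self.responses = defaultdict(list)  # concept -> [[reply, score], ...]

    def learn(self, concept, reply):
        """Store a reply observed for this concept, starting at score zero."""
        self.responses[concept].append([reply, 0.0])

    def respond(self, concept):
        """Pick the highest-scoring known reply for the concept, if any."""
        candidates = self.responses.get(concept)
        if not candidates:
            return None
        return max(candidates, key=lambda c: c[1])[0]

    def feedback(self, concept, reply, conversation_continued):
        """Binary feedback: reward replies after which the conversation went on."""
        for cand in self.responses[concept]:
            if cand[0] == reply:
                cand[1] += 1.0 if conversation_continued else -1.0
```

This is deliberately crude: the real systems rank over billions of conversations with learned similarity, but the shape of the loop–retrieve, respond, score by continuation–is the one Wang describes.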
Quickly, Tay picked up some very bad words and habits from some of the Twitter users it was designed to respond to, and around midnight on its first day, began spitting out some very hateful and racist tweets. Less than 24 hours later, Microsoft had set Tay’s Twitter account to private. Hard lesson learned.

Microsoft knew that its bot would pick up on verbal traits in the people with which it interacted–that’s desirable if the bot is interacting with sane, amenable people, but Tay was interacting with trolls and teens, who often are not. “I think the main thing that we learned was, a bot in a public network like Twitter is really different than what we designed it for, which is more small group and one-on-one,” says Lili Cheng.

Still, she says, Microsoft put the Japanese version of Tay, Rinna, on Twitter previously, and got a very different result. “It was super boring,” Cheng says. “No one ever uses it.” That might be partly because Rinna speaks only Japanese and the Japanese Twitter audience is relatively smaller. But Microsoft didn’t anticipate the level of interaction on English Twitter. “Our biggest worry when we launched was okay, well, this might be really boring because who’s going to want to talk to a chatbot in public,” Cheng said of Tay. “We were wrong. People turned out.” Tay and the reaction to Tay was “very interesting,” says Cheng. “But I think it also was, ‘wow, there is so much public interest in general with bots.’”

Bots That Get To Know You

Abrahams, the leader of the Skype bot lab, hints at a farther frontier, where bots do more than just chat users through to some end goal. They will use emotional intelligence; they’ll be capable of empathy, perhaps when the user’s goal is less defined, or if the goal itself is emotional in nature. This is a different ball game, and anybody at Microsoft will tell you we’re barely into the first inning. But it’s enough to get a glimpse into a very different way of relating to machines in the future.
That’s why many of the people I spoke to at Microsoft don’t talk or act like left-brain-oriented engineer types. Abrahams’ background isn’t even in computer science: He studied art and filmmaking in college and grad school. “It’s funny that I arrived here doing bots because in some sense it’s a perfect manifestation of art and science,” he says. Bot building is truly a careful mix of very advanced computer science and very human content. “It forces you to be super creative; it forces you to work with writers and artists along with the scientists.”

Talking to Abrahams, however, I also get the business case for the Bot Lab. Companies that want to sell you things want to understand the liquid gold that is your intent and desire, and conversation is an appealing way to extract that from you. An understanding of consumer intent enables a whole range of things, from simple personalization to careful product targeting. There’s no app for that. “Apps aren’t really good at personalizing an experience based on all the conversations it’s had with you to date,” Abrahams says. He says some of the companies he works with are starting out by putting bots at the back of their apps as a way of collecting open-ended feedback and figuring out how to follow up.

But bots are already starting to play a bigger role in the interactions that we have with companies. “People are going to start expecting it,” Abrahams says. “People are going to say [to companies] ‘I know you have an app but I’m actually not quite ready for a transaction yet. What is there in front of that that I can begin a conversation with?’” Of course, we heard something like that from Mark Zuckerberg, more than a year ago, and bots haven’t moved toward the mainstream very much at all. Abrahams is very aware of that. “Zuckerberg got up and said people will never want to use the phone again, they’ll want to use bots,” Abrahams says.
“But to me that was never the goal–this is not just about selling flowers.” Instead, he believes bots will let companies and customers know more about each other. “It could be a whole level of information that they never had.” But so far, “to some extent, they haven’t been useful enough and that’s why we’ve seen lackluster adoption.”

Still, Abrahams is quick to point out that Microsoft and other companies have learned a lot over the past year. Plus, he says, because of their generally poor showing so far, expectation levels for bots are low. “I think if we were having this conversation a year from now, you’d say, ‘Steven, we saw chatbots really come into their own in 2017; I’m doing things now in chatbots that I couldn’t necessarily do in apps before.’”

Calendar.help And Other Bots For Herding Cats

One major wing of Microsoft’s bot efforts concerns, naturally, the enterprise market, and removing some of the mind-numbing, time-eating tasks we all hate. Microsoft’s traditional productivity apps–Word, Excel, Outlook, OneNote, and so on–enable central business tasks, while a new set of bots are meant to remove humans from busywork that distracts from those critical functions.

The Calendar.help bot–official name coming soon–is an obvious case in point. Its job is to help set meetings and other events, and coordinate schedules. In other words, it helps herd cats. It uses Microsoft’s natural language virtual assistant Cortana as its human interface.

Let’s say you’re the faithful assistant of a high-powered company exec. Somebody emails saying they want to do lunch with your boss. You could trigger the appointment-setting bot by cc’ing Cortana on the response email. The bot understands from the email who the attendees should be, and that the conversation is about a lunch meeting. First, it looks at the exec’s calendar to find some possible time slots.
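That slot-finding step is, at heart, an availability-intersection search: push forward through the calendar until a time no participant has booked. A toy sketch (real scheduling works over time slots and email negotiation, not direct calendar reads; the day-granularity here is a simplifying assumption):

```python
from datetime import date, timedelta

def find_slot(organizer_busy, attendees_busy, start, max_days=30):
    """Scan forward from `start` for the first day no participant has booked.
    Calendars are modeled as sets of busy dates for simplicity."""
    all_busy = set(organizer_busy)
    for busy in attendees_busy:
        all_busy |= set(busy)  # union of everyone's busy days
    for offset in range(max_days):
        candidate = start + timedelta(days=offset)
        if candidate not in all_busy:
            return candidate
    # No agreement within the window: hand back to the human,
    # as in "we've only got two responses; how would you like me to proceed?"
    return None
```

The `None` branch corresponds to the escalation path described below, where Cortana comes back to the human when it can't resolve a fork in the road on its own.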
It then sends a note to all attendees: “Hello, I’m helping set up a lunch with you and Bob Smith; here are some possible times.” If none of the proposed dates work, it will keep trying dates further into the future until it finds one everyone can agree on. You’re carbon copied on all the email correspondence, so that there’s always human supervision.

“There’s lots of forks in the road, a lot of places where Cortana will have to make a decision on the course of action,” Outlook marketing director Jon Orton tells me. “And in some cases she’ll have to come back and say, ‘Hey, we’ve only got two responses; how would you like me to proceed?’” While most bots work through tasks by establishing a dialog with the user, Orton tells me, Calendar.help initially leaves the user out of the process and goes off on its own to complete the task of setting up an event. Only at the end of that work does the bot check with the human invitees to make sure the meeting time works.

Orton says the bot is already being used, and “we’ve found that it can save five hours a week for some people.” In the future, he says, the bot will likely learn how to do reservations and bookings, and organize e-hail rides. It might even help users manage business relationships. For instance, the bot might send you a notification: “I notice you haven’t had contact with Joe Johnson in 8 months–would you like to get in touch?”

Who.bot And Bots That Work For You

Bots are already busy at work inside Teams, Microsoft’s answer to the hugely popular work messaging platform Slack. The group chat service, which Microsoft released in November to Office 365 Enterprise and Small Business users, does a bunch of Slacky stuff like public and private group chats, GIFs, emoji, and, thanks to a Skype integration, voice calls. As with Slack, Teams is also a good place for bots.
“The transformative opportunity with bots that hasn’t got quite as much airtime or mindshare is on the enterprise side,” Larry Jin, Teams senior program manager, tells me. “There’s a huge opportunity there.”

In Teams, bots appear just like people, with a hexagonal avatar, except they never go away or go to sleep, and they never have a mood message. Users can rely on bots–some by Microsoft, some by third-party developers–to help them look up benefits, file expense reports, manage performance reviews, and request time off, for example. Soon, bots in Teams will help groups of collaborators track project status, or help groups book travel plans together. But perhaps the biggest strength of Teams bots versus those on Slack and other competitors is that they’re integrated in some meaningful ways with Office 365 and the other Microsoft apps many companies already use.

WhoBot is an example of a bot that leverages that integration. The bot is used to identify people within Microsoft’s workforce who have a desired skill set like “social marketing” or “search engine optimization.” It might scan titles that suggest those skill sets (from the company directory), or pick people who often hold meetings on those subjects (from Calendar), or people who have composed documents on those subjects (from OneNote) or people who just talk about these topics a lot (from Teams). The bot can also bring back the documents those people have written on the subject, their contact information, and other personal data.

“People are spending a lot of time in Teams,” Jin says, “so being able to do this in situ without having to go out to some obscure tool, or go out to the browser, or having to go into email and hunt around for five different things, is just huge.” Today, WhoBot crawls information on people within the company, but it will very likely soon be able to bring back data on people outside the company via an integration with the LinkedIn graph.
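WhoBot's cross-source aggregation–titles from the directory, meetings from Calendar, documents from OneNote, chatter from Teams–amounts to scoring people by skill mentions across data sources. A toy sketch, with the source layout and all names being hypothetical:

```python
from collections import Counter

def find_experts(skill, sources):
    """Rank people by how often a skill keyword shows up across sources.
    `sources` maps a source name (directory, calendar, docs, chat) to a
    dict of {person: [text snippets associated with that person]}."""
    scores = Counter()
    for records in sources.values():
        for person, snippets in records.items():
            scores[person] += sum(1 for s in snippets if skill in s.lower())
    # Return only people with at least one match, best-scoring first.
    return [person for person, score in scores.most_common() if score > 0]
```

A production version would weight sources differently (a job title is stronger evidence than a stray chat mention) and attach the supporting documents, as the article describes, but the ranking skeleton is the same.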
Microsoft bought LinkedIn for $26.2 billion in June 2016, and Jin said early discussions on that integration have begun. Typically, people refer to the “Microsoft Graph” to mean all the public domain data (on places, things, people, events) in Bing and all the work and productivity data tucked into Windows. Soon, LinkedIn data will emerge as the third major component of the Microsoft Graph, Jin told me. Teams bots, and Microsoft bots of all kinds, will be able to access and blend that knowledge in useful ways.

Teams bots might soon be capable of doing even more research for you. One of the more interesting “cognitive skills” that Microsoft is eagerly developing for use in bots and other applications is called Machine Reading Comprehension. Developers are building AIs, for example, that could digest an entire technical manual, then, in a chatroom or in spoken conversation, answer a human’s questions about it. This is an extremely challenging project, because software would have to know more than the meanings of individual words; it would have to be trained to understand things like nuance and context. It would have to understand both the letter and the spirit of a written work. A bot that could do this in a refined way–and offer an escape hatch leading to a human in case the conversation hit a wall–could be amazing. (It could also decimate jobs at places like call centers and reservations centers; similar systems are already automating the routine legal work of junior lawyers, for instance.)

Among the tech giants, Microsoft may be the most progressive in its understanding of bots. It’s investing real money in developing the “emotional intelligence” of bots. And the breadth and depth of the plug-and-play AI services the company is offering developers is impressive. As Harry Shum noted, Microsoft is the only big tech company developing cognitive skills that span most of the senses. Despite progress on bots and the AI behind them, their adoption remains relatively low.
While Facebook claims it now has 100,000 bots on its platform, a survey of companies by Forrester Research found that only 4% have already deployed a bot (although 31% are testing or have plans to deploy them). Facebook and Microsoft’s excitement about bots has mellowed too, at least publicly. At its most recent developer conference, in March, Facebook tempered its bot talk, focusing on topics like AI and VR instead. And of the roughly 4.5 hours of keynote time over two days at this year’s Build conference, Microsoft gave bots just two minutes. When it did, it underscored the existing reality: that most bots in use are often little more than glorified apps with a new interface. One of the new “canvases” where developers can put the bots they’ve created with the Bot Framework is its Bing search engine. Already, Microsoft is integrating bots into the results of certain types of Bing searches: for example, if you search for “Seattle restaurants,” you’ll get a bot that displays a handful of suggestions. But these bots, too, contain very little information and the set of questions you can ask of them is very limited. You can’t, for instance, use them to make a reservation. That limited functionality is a reminder that the excitement about bots has so far been largely based on aspirations, not reality. Today, many so-called “bots” are little more than front ends for websites or search engines. Referring users to some other property seems to be their answer to all but a few narrowly defined questions. Still, last year’s bot hype out of Microsoft and Facebook may have actually done everyone a favor, by lowering expectations for what the technology can do while simultaneously raising public awareness of what bots are. “I think [last] March, if I had said we’re working on bots, people would be like, ‘What, what are those?'” says Lili Cheng. Now, the idea has become somewhat commonplace, she says. 
Meanwhile, public understanding of the machine learning that underpins bots has come a long way too. Last year, “a lot of people were [saying] we don’t want to use the word AI because that’s been over-promised and blah, blah, blah,” Cheng says. “Now everybody is doing ‘AI’.” After the initial bout of bot excitement, Cheng and Shum tell me, engineers at Microsoft are now heads-down, refining the tools to put bots to work in meaningful ways for real clients in real-world use cases. And the company is still doing a lot of evangelism and education with the 130,000-plus developers now registered to use (though not necessarily using) the Bot Framework. That framework is uniquely Microsoft–service-oriented, enterprise-focused, backed by thorough internal research and steady improvements in AI.

Stop Me If You’ve Heard This One Before

While the initial debate over how big a role AI will play in the future is over (answer: very big), the future of bots seems less clear. It is, however, telling that the early days of bots bear some striking resemblances to the early days of apps. I remember an initial wave of disappointment and skepticism over what apps could really do. I remember lots of developers jumping into the game before they really understood what they were creating and why. I remember pages and pages of silly and useless apps. I remember the hard problem of helping good apps find their audience. All these things can be said of bots (and “skills”) today. I also remember that within the first wave of apps were a few gems, a select few standout apps that were immediately and obviously useful. I see that same quality in some of the bots I met during my visit to Microsoft. Bots like Zo, Xiao Ice, Language Learning Bot, and Calendar.help still need lots of refinement but their potential utility is plain. It took a while for killer apps to show up, and it’ll be a while before we see any killer chat bots.
Of course, none of those similarities guarantee that bots will rise to the same level of mainstream acceptance as apps have. But it makes me think twice about dismissing them as inconsequential or ill-timed.


This question originally appeared on Quora. Answer by Eric Jang. Firstly, my response contains some bias, because I work at Google Brain and I really like it there. My opinions are my own, and I do not speak for the rest of my colleagues or Alphabet as a whole. I rank “leaders in AI research” among IBM, Google, Facebook, Apple, Baidu, Microsoft as follows: I would say Deepmind is probably #1 right now, in terms of AI research. Their publications are highly respected within the research community, and span a myriad of topics such as Deep Reinforcement Learning, Bayesian Neural Nets, Robotics, transfer learning, and others. Being London-based, they recruit heavily from Oxford and Cambridge, which are great ML feeder programs in Europe. They hire an intellectually diverse team to focus on general AI research, including traditional software engineers to build infrastructure and tooling, UX designers to help make research tools, and even ecologists (Drew Purves) to research far-field ideas like the relationship between ecology and intelligence. They are second to none when it comes to PR and capturing the imagination of the public at large, such as with DQN-Atari and the history-making AlphaGo. Whenever a Deepmind paper drops, it shoots up to the top of Reddit’s Machine Learning page and often Hacker News, which is a testament to how well-respected they are within the tech community. Before you roll your eyes at me putting two Alphabet companies at the top of this list, I discount this statement by also ranking Facebook and OpenAI on equal terms at #2. Scroll down if you don’t want to hear me gush about Google Brain :) With all due respect to Yann LeCun (he has a pretty good answer), I think he is mistaken about Google Brain’s prominence in the research community; his characterization is categorically false, to the max. The TensorFlow team (which builds Brain’s best-known product) is just one of many Brain subteams, and is to my knowledge the only one that builds an externally-facing product.
When Brain first started, the first research projects were indeed engineering-heavy, but today, Brain has many employees who focus on long-term AI research in every AI subfield imaginable, similar to FAIR and Deepmind. FAIR has 16 accepted publications at the ICLR 2017 conference track (announcement by Yann) with 3 selected for orals (i.e. very distinguished publications). Google Brain actually slightly edged out FB this year at ICLR 2017, with 20 accepted papers and 4 selected for orals. This doesn’t count publications from Deepmind or other teams doing research within Google (Search, VR, Photos). Comparing the number of accepted papers is hardly a good metric, but I want to dispel any insinuations by Yann that Brain is not a legitimate place to do Deep Learning research. Google Brain is also the industry research org with the most collaborative flexibility. I don’t think any other research institution in the world, industrial or otherwise, has ongoing collaborations with Berkeley, Stanford, CMU, OpenAI, Deepmind, Google X, and a myriad of product teams within Google. I believe that Brain will be regarded as a top-tier institution in the near future. I had offers from both Brain and Deepmind, and chose the former because I felt that Brain gave me more flexibility to design my own research projects, collaborate more closely with internal Google teams, and join some really interesting robotics initiatives that I can’t disclose… yet. FAIR’s papers are good and my impression is that a big focus for them is language-domain problems like question answering, dynamic memory, Turing-test-type stuff. Occasionally there are some statistical-physics-meets-deep-learning papers. Obviously they do computer vision type work as well. I wish I could say more, but I don’t know enough about FAIR beyond the fact that their reputation is very good.
They almost lost the Deep Learning Framework wars with the widespread adoption of TensorFlow, but we’ll see if Pytorch is able to successfully capture back market share. One weakness of FAIR, in my opinion, is that it’s very difficult to have a research role at FAIR without a PhD. A FAIR recruiter told me this last year. Indeed, PhDs tend to be smarter, but I don’t think having a PhD is necessary to bring fresh perspectives and make great contributions to science. OpenAI has an all-star list of employees: Ilya Sutskever (all-around Deep Learning master), John Schulman (inventor of TRPO, master of policy gradients), Pieter Abbeel (robot sent from the future to crank out a river of robotics research papers), Andrej Karpathy (Char-RNN, CNNs), Durk Kingma (co-inventor of VAEs) to name a few. Despite being a small group of ~50 people (so I guess not a “Big Player” by headcount or financial resources), they also have a top-notch engineering team and publish top-notch, really thoughtful research tools like Gym and Universe. They’re adding a lot of value to the broader research community by providing software that was once locked up inside big tech companies. This has added a lot of pressure on other groups to start open-sourcing their codes and tools as well. I almost ranked them as #1, on par with Deepmind in terms of top-research talent, but they haven’t really been around long enough for me to confidently assert this. They also haven’t pulled off an achievement comparable to AlphaGo yet, though I can’t overstate how important Gym / Universe are to the research community. As a small non-profit research group building all their infrastructure from scratch, they don’t have nearly as much GPU resources, robots, or software infrastructure as big tech companies. Having lots of compute makes a big difference in research ability and even the ideas one is able to come up with. 
Startups are hard and we’ll see whether they are able to continue attracting top talent in the coming years. Baidu SVAIL and Baidu Institute of Deep Learning are excellent places to do research, and they are working on a lot of promising technologies like home assistants, aids for the blind, and self-driving cars. Baidu does have some reputation issues, such as recent scandals with violating ImageNet competition rules, low-quality search results leading to a Chinese student dying of cancer, and being stereotyped by Americans as a somewhat-sketchy Chinese copycat tech company complicit in authoritarian censorship. They are definitely the strongest player in AI in China though. Before the Deep Learning revolution, Microsoft Research used to be the most prestigious place to go. They hire very experienced faculty, which might explain why they sort of missed out on Deep Learning (the revolution in Deep Learning has largely been driven by PhD students). Unfortunately, almost all deep learning research is done on Linux platforms these days, and their CNTK deep learning framework hasn’t gotten as much attention as TensorFlow, Torch, Chainer, etc. Apple is really struggling to hire deep learning talent, as researchers tend to want to publish and do research, which goes against Apple’s culture as a product company. This typically doesn’t attract those who want to solve general AI or have their work published and acknowledged by the research community. I think Apple’s design roots have a lot of parallels to research, especially when it comes to audacious creativity, but the constraints of shipping an “insanely great” product can be a hindrance to long-term basic science. I know a former IBM employee who worked on Watson and describes IBM’s “cognitive computing efforts” as a total disaster, driven by management that has no idea what ML can or cannot do but sells the buzzword anyway.
Watson uses Deep Learning for image understanding, but as I understand it the rest of the information retrieval system doesn’t really leverage modern advances in Deep Learning. Basically there is a huge secondary market for startups to capture applied ML opportunities whenever IBM fumbles and drops the ball. No offense to IBM researchers; you’re far better scientists than I ever will be. My gripe is that the corporate culture at IBM is not conducive to leading AI research. To be honest, all the above companies (maybe with the exception of IBM) are great places to do Deep Learning research, and given open source software + how prolific the entire field is nowadays, I don’t think any one tech firm “leads AI research” by a substantial margin. There are some places, like Salesforce/MetaMind and Amazon, that I hear are quite good but don’t know enough about to rank. My advice for a prospective Deep Learning researcher is to find a team / project that you’re interested in, ignore what others say regarding reputation, and focus on doing your best work so that your organization becomes regarded as a leader in AI research :)


News Article | May 19, 2017
Site: cerncourier.com

On 4 April, CERN alumnus Tim Berners-Lee received the 2016 A.M. Turing Award for his invention of the World Wide Web, the first web browser, and the fundamental protocols and algorithms allowing the web to scale. Named in honour of British mathematician and computer scientist Alan Turing, and often referred to as the Nobel prize of computing, the annual award of $1 million is given by the Association for Computing Machinery. In 1989, while working at CERN, Berners-Lee wrote a proposal for a new information-management system for the laboratory, and by the end of the following year he had invented one of the most influential computing innovations in history – the World Wide Web. Berners-Lee is now a professor at Massachusetts Institute of Technology and the University of Oxford, and director of the World Wide Web Consortium and the World Wide Web Foundation. The International Centre for Theoretical Physics 2016 Dirac Medal has been awarded to Nathan Seiberg of the Institute for Advanced Study in Princeton, and Mikhail Shifman and Arkady Vainshtein of the University of Minnesota. The award recognises the trio’s important contributions to field theories in the non-perturbative regime and in particular for exact results obtained in supersymmetric field theories. The second edition of the Guido Altarelli Award, given to young scientists in the field of deep inelastic scattering and related subjects, was awarded to two researchers during the 2017 Deep Inelastic Scattering workshop held in Birmingham, UK, on 3 April. Maria Ubiali of Cambridge University in the UK was recognised for her theoretical contributions in the field of proton parton density functions, and in particular for her seminal contributions to the understanding of heavy-quark dynamics. Experimentalist Paolo Gunnellini of DESY, who is a member of the CMS collaboration, received the award for his innovative ideas in the study of double parton scattering and in Monte Carlo tuning.
Four members of the IceCube neutrino observatory, based at the South Pole, have independently won awards recognising their contributions to the field. Aya Ishihara of Chiba University in Japan was awarded the 37th annual Saruhashi Prize, given each year to a female scientist under the age of 50 for exceptional research accomplishments. This year’s prize, presented in Tokyo on 27 May, cites Ishihara’s contributions to high-energy astronomy with the IceCube detector. Fellow IceCube collaborator Subir Sarkar of the University of Oxford, UK, and the Niels Bohr Institute in Denmark has won the 4th Homi Bhabha prize. Awarded since 2010 by the Tata Institute of Fundamental Research (TIFR) in India and the International Union of Pure and Applied Physics, the prize recognises an active scientist who has made distinguished contributions in the field of high-energy cosmic-ray and astroparticle physics over an extended academic career. Sarkar has also worked on the Pierre Auger Observatory and is a member of the Cherenkov Telescope Array collaboration. Meanwhile, former IceCube spokesperson Christian Spiering from DESY has won the O’Ceallaigh Medal for astroparticle physics, awarded every second year by the Dublin Institute for Advanced Studies. Spiering, who led the collaboration from 2005 to 2007 and also played a key role in the Lake Baikal Neutrino Telescope, was honoured “for his outstanding contributions to cosmic-ray physics and to the newly emerging field of neutrino astronomy in particular”. Both he and Sarkar will receive their awards at the 35th International Cosmic Ray Conference in Busan, South Korea, on 13 July. Finally, IceCube member Ben Jones of the University of Texas at Arlington has won the APS 2017 Mitsuyoshi Tanaka Dissertation Award in Experimental Particle Physics, for his thesis “Sterile Neutrinos in Cold Climates”. 
An awards ceremony took place at CERN on 3 April recognising companies that have won contracts to start building the prototype phase of the Helix Nebula Science Cloud (HNSciCloud). Initiated by CERN in 2016, HNSciCloud is a €5.3 million pre-commercial procurement tender driven by 10 leading research organisations and funded by the European Commission. Its aim is to establish a European cloud platform to support high-performance computing and big-data capabilities for scientific research. The April event marked the official beginning of the prototype phase, which covers the procurement of R&D services for the design, prototype development and pilot use of innovative cloud services. The three winning consortia are: T-Systems, Huawei, Cyfronet and Divia; IBM; and RHEA Group, T-Systems, Exoscale and SixSq. Each presented its plans to build the HNSciCloud prototype and the first deliverables are expected by the end of the year, after which two consortia will proceed to the pilot phase in 2018. The CERN Accelerator School (CAS) organised a specialised course devoted to beam injection, extraction and transfer in Erice, Sicily, from 10 to 19 March. The course was held in the Ettore Majorana Foundation and Centre, and was attended by 72 participants from 25 countries including China, Iran, Russia and the US. The intensive programme comprised 32 lectures and two seminars, with 10 hours of case studies allowing students to apply their knowledge to real problems. Following introductory talks on electromagnetism, relativity and the basics of beam dynamics, different injection and extraction schemes were presented. Detailed lectures about the special magnetic and electrostatic elements for the case of lepton and hadron beams followed. State-of-the-art kicker and septa designs were discussed, as were issues related to stripping-injection and resonant extraction as used in medical settings. 
An overview of optics measurements in storage rings and non-periodic structures completed the programme, with talks about the production of secondary and radioactive beams and exotic injection methods. The next CAS course, focusing on advanced accelerator physics, will take place at Royal Holloway University in the UK from 3–15 September. Later in the year, CAS is participating in a joint venture with the accelerator schools of the US, Japan and Russia. This school is devoted to RF technologies and will be held in Japan from 16–26 October. Looking further ahead, schools are currently planned in 2018 on accelerator physics at the introductory level, on future colliders and on beam instrumentation and diagnostics. See https://www.cern.ch/schools/CAS. Around 100 participants from 15 countries attended the 2017 Testing Gravity Conference at Simon Fraser University’s Harbour Centre in Vancouver, Canada, from 25 to 28 January. The conference, the second such meeting following the success of the 2015 event, brought together experts exploring new ways to test general relativity (GR). GR, and its Newtonian limit, work very well in most circumstances. But gaps in our understanding appear when the theory is applied to extremely small distances, where quantum mechanics reigns, or extremely large distances, when we try to describe the universe. Advancing technologies across all areas of physics open up opportunities for testing gravity in new ways, thus helping to fill these gaps. The conference brought together renowned cosmologists, astrophysicists, and atomic, nuclear and particle physicists to share their specific approaches to test GR and to explore ways to address long-standing mysteries, such as the unexplained nature of dark matter and dark energy.
Among the actively discussed topics were the breakthrough discovery in February 2016 of gravitational waves by the LIGO observatory, which has opened up exciting opportunities for testing GR in detail (CERN Courier January/February 2017 p34), and the growing interest in gravity tests among the CERN physics community – specifically regarding attempting to measure the gravitational force on antihydrogen with three experiments at CERN’s Antiproton Decelerator (CERN Courier January/February 2017 p39). Among other highlights there were fascinating talks from pioneers in their fields, including cosmologist Misao Sasaki, one of the fathers of inflationary theory; Eric Adelberger, a leader in gravity tests at short distances; and Frans Pretorius, who created the first successful computer simulations of black-hole collisions. This is an exciting time for the field of gravity research. The LIGO–Virgo collaboration is expected to detect many more gravitational-wave events from binary black holes and neutron stars. Meanwhile, a new generation of cosmological probes currently under development, such as Euclid, LSST and SKA, are stimulating theoretical research in their respective domains (CERN Courier May 2017 p19). We are already looking forward to the next Testing Gravity in Vancouver in 2019. On 12 April, CERN hosted the seven-member high-level group of scientific advisers to the European Commission, which provides independent scientific advice on specific policy issues. Led by former CERN Director-General Rolf Heuer, the group toured ATLAS and the AMS Payload Operations Control Centre. On 18 April, Czech minister of health Miloslav Ludvik visited CERN, during which he toured the ALICE experiment and signed the guestbook with head of Member State relations Pippa Wells. Minister for higher education and science in Denmark Søren Pind visited CERN on 25 April, touring the synchrocyclotron, the Antiproton Decelerator, ALICE and ATLAS. 
Here he is pictured (centre) meeting ATLAS spokesperson Karl Jakobs. Dr Viktoras Pranckietis MP and speaker of the Seimas, Republic of Lithuania, visited CERN on 26 April, taking in CMS, ISOLDE and MEDICIS. He signed the guestbook with senior adviser for Lithuania Tadeusz Kurtyka (left) and director for finance and human resources Martin Steinacher.


News Article | April 17, 2017
Site: www.scientificamerican.com

The brain processes sights, sounds and other sensory information—and even makes decisions—based on a calculation of probabilities. At least, that’s what a number of leading theories of mental processing tell us: The body’s master controller builds an internal model from past experiences, and then predicts how best to behave. Although studies have shown humans and other animals make varied behavioral choices even when performing the same task in an identical environment, these hypotheses often attribute such fluctuations to “noise”—to an error in the system. But not everyone agrees this provides the complete picture. After all, sometimes it really does pay off for randomness to enter the equation. A prey animal has a higher chance of escaping predators if its behavior cannot be anticipated easily, something made possible by introducing greater variability into its decision-making. Or in less stable conditions, when prior experience can no longer provide an accurate gauge for how to act, this kind of complex behavior allows the animal to explore more diverse options, improving its odds of finding the optimal solution. One 2014 study found rats resorted to random behavior when they realized nonrandom behavior was insufficient for outsmarting a computer algorithm. Perhaps, then, this variance cannot simply be chalked up to mere noise. Instead, it plays an essential role in how the brain functions. Now, in a study published April 12 in PLoS Computational Biology, a group of researchers in the Algorithmic Nature Group at LABORES Scientific Research Lab for the Natural and Digital Sciences in Paris hope to illuminate how this complexity unfolds in humans. “When the rats tried to behave randomly [in 2014],” says Hector Zenil, a computer scientist who is one of the study’s authors, “researchers saw that they were computing how to behave randomly. 
This computation is what we wanted to capture in our study.” Zenil’s team found that, on average, people’s ability to behave randomly peaks at age 25, then slowly declines until age 60, when it starts to decrease much more rapidly. To test this, the researchers had more than 3,400 participants, aged four to 91, complete a series of tasks—“a sort of reversed Turing test,” Zenil says, determining how well a human can outcompete a computer when it comes to producing and recognizing random patterns. The subjects had to create sequences of coin tosses and die rolls they believed would look random to another person, guess which card would be drawn from a randomly shuffled deck, point to circles on a screen and color in a grid to form a seemingly random design. The team then analyzed these responses to quantify their level of randomness by determining the probability that a computer algorithm could generate the same decisions, measuring algorithmic complexity as the length of the shortest possible computer program that could model the participants’ choices. In other words, the more random a person’s behavior, the more difficult it would be to describe his or her responses mathematically, and the longer the algorithm would be. If a sequence were truly random, it would not be possible for such a program to compress the data at all—it would be the same length as the original sequence. After controlling for factors such as language, sex and education, the researchers concluded age was the only characteristic that affected how randomly someone behaved. “At age 25, people can outsmart computers at generating this kind of randomness,” Zenil says. This developmental  trajectory, he adds, reflects what scientists would expect measures of higher cognitive abilities to look like. 
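The compressibility idea behind that measure can be sketched in a few lines of Python. This is only a rough stand-in: the study used algorithmic-probability methods rather than a general-purpose compressor, and zlib's output length is just a crude upper bound on the length of the shortest program that could reproduce a sequence.

```python
import random
import zlib

def compressed_len(bits):
    """Bytes needed by zlib to encode a 0/1 sequence: a crude upper
    bound on the sequence's algorithmic complexity."""
    data = "".join(map(str, bits)).encode()
    return len(zlib.compress(data, 9))

n = 4096
patterned = [i % 2 for i in range(n)]              # 0,1,0,1,... highly regular
random.seed(0)
noisy = [random.getrandbits(1) for _ in range(n)]  # pseudo-random coin tosses

# The regular sequence compresses far better: a short description
# ("alternate 0 and 1") captures it, while the noisy one resists compression.
print(compressed_len(patterned) < compressed_len(noisy))  # True
```

In the study's terms, a participant whose coin-toss sequences compress poorly behaved more randomly than one whose sequences collapse to a short description.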
In fact, a sense of complexity and randomness is based on cognitive functions including attention, inhibition and working memory (which were involved in the study’s five tasks)—although the exact mechanisms behind this relationship remain unknown. “It is around 25, then, that minds are the sharpest.” This makes biological sense, according to Zenil: Natural selection would favor a greater capacity for generating randomness during key reproductive years. The study’s results may even have implications for understanding human creativity. After all, a large part of being creative is the ability to develop new approaches and test different outcomes. “That means accessing a larger repository of diversity,” Zenil says, “which is essentially randomness. So at 25, people have more resources to behave creatively.” Zenil’s findings support previous research, which also showed a decline in random behavior with age. But this is the first study to employ an algorithmic approach to measuring complexity as well as the first to do so over a continuous age range. “Earlier studies considered groups of young and older adults, capturing specific statistical aspects such as repetition rate in very long response sequences,” says Gordana Dodig-Crnkovic, a computer scientist at Mälardalen University in Sweden, who was not involved in the research. “The present article goes a step further.” Using algorithmic measures of randomness, rather than statistical ones, allowed Zenil’s team to examine true random behavior instead of statistical, or pseudorandom, behavior—which, although satisfying statistical tests for randomness, would not necessarily be “incompressible” the way truly random data is. The fact that algorithmic capability differed with age implies the brain is algorithmic in nature—that it does not assume the world is statistically random but takes a more generalized approach without the biases described in more traditional statistical models of the brain. 
These results may open up a wider perspective on how the brain works: as an algorithmic probability estimator. The theory would update and eliminate some of the biases in statistical models of decision-making that lie at the heart of prevalent theories—prominent among them is the Bayesian brain hypothesis, which holds that the mind assigns a probability to a conjecture and revises it when new information is received from the senses.  “The brain is highly algorithmic,” Zenil says. “It doesn’t behave stochastically, or as a sort of coin-tossing mechanism.” Neglecting an algorithmic approach in favor of only statistical ones gives us an incomplete understanding of the brain, he adds. For instance, a statistical approach does not explain why we can remember sequences of digits such as a phone number—take “246-810-1214,” whose digits are simply even counting numbers: This is not a statistical property, but an algorithmic one. We can recognize the pattern and use it to memorize the number. Algorithmic probability, moreover, allows us to more easily find (and compress) patterns in information that appears random. “This is a paradigm shift,” Zenil says, “because even though most researchers agree that there is this algorithmic component in the way the mind works, we had been unable to measure it because we did not have the right tools, which we have now developed and introduced in our study.” Zenil and his team plan to continue exploring human algorithmic complexity, and hope to shed light on the cognitive mechanisms underlying the relationship between behavioral randomness and age. First, however, they plan to conduct their experiments with people who have been diagnosed with neurodegenerative diseases and mental disorders, including Alzheimer’s and schizophrenia. 
Zenil predicts, for example, that participants diagnosed with the latter will not generate or perceive randomness as well as their counterparts in the control group, because they often make more associations and observe more patterns than the average person does. The researchers’ colleagues are standing by. Their work on complexity, says Dodig-Crnkovic, “presents a very promising approach.”


News Article | April 21, 2017
Site: www.eurekalert.org

New Apress book explains technical foundations of the Ethereum project -- view to new products and services

Cryptocurrencies are on the rise, and blockchain protocols are taking the world by storm. Ethereum is an open-source public blockchain that features smart contracts written in the Turing-complete scripting language Solidity. The open-source Ethereum protocol was first proposed in 2013, along with its native cryptocurrency ether. Since then ether has grown to become the second-largest cryptocurrency by market capitalization after bitcoin. Introducing Ethereum and Solidity, written by Chris Dannen and published by Apress, compiles the basic technical principles underlying Ethereum and situates the project within the existing world of hardware and software. The book familiarizes readers with blockchain programming paradigms, and introduces the programming language Solidity. With this book as their guide, readers will be able to learn the foundations of smart contract programming and distributed application development. The author starts out by reviewing the fundamentals of programming and networking, and describing how blockchains can solve long-standing technology challenges. The book also outlines the new discipline of crypto-economics, the study of game-theoretical systems written in pure software. Readers are then guided into deploying smart contracts of their own, and learning how those can serve as a back-end for JavaScript and HTML applications on the web. "Unlike other tutorials, Introducing Ethereum and Solidity is written for technology professionals, financial services professionals, and enthusiasts of all levels. It provides creative technologists with a gateway from concept to deployment," says the author. Chris Dannen graduated from the University of Virginia. He is founder and partner at Iterative Instinct, a hybrid investment fund focused on cryptocurrency trading and seed-stage venture investments.
He worked previously as a business journalist and corporate strategist. A self-taught programmer, he holds one computer hardware patent. Chris Dannen was formerly a Senior Editor at Fast Company and today consults on technical content for major publishers. Chris Dannen Introducing Ethereum and Solidity Foundations of Cryptocurrency and Blockchain Programming for Beginners 2017, 206 p. 34 illus. 31 illus. in colour Softcover € 36.99 (D) | £ 27.99 | $ 39.99 ISBN 978-1-4842-2534-9 Also available as an eBook


LAS VEGAS--(BUSINESS WIRE)--SubscriberWise, the nation’s largest issuing CRA for the communications industry and the leading advocate for children victimized by identity fraud, announced today the appearance by company founder and CEO David Howe at the 2017 United States Bowling Congress. The preeminent bowling event is widely recognized as the largest participatory sporting event in the world. This year the tournament is held at the $30 million South Point Bowling Plaza, located inside the South Point Hotel, Casino & Spa in beautiful Las Vegas, NV. "Last week I had the privilege to witness one of the most prestigious bowling events in this nation," said David Howe, SubscriberWise founder, national child protector, and FICO mega-star global MVP all-time worldwide highest achieving supreme master champion. "The 2017 USBC Open Championship was truly an unforgettable event – with excitement and energy so dynamic that neither could be contained. “Watching the pros striking down the pins with style and precision, I could hardly believe that I was part of a squad as a small-town novice bowler more than 30 years ago,” Howe recalled. “Although I was far from a stand-out on the lanes, it never mattered, because I always had fun and I was part of the team. “But one thing is remarkably evident today,” Howe added. “I don't know how I ever lifted those heavy balls. I also don’t know how I rolled them so hard and fast down the alley. “Nevertheless, this event will not be marked by the thunder of the balls or the strike of the pins,” Howe continued. “It will not be marked by the accolades, awards, or the 300-pin perfect rolls. And it will certainly not be marked by a visit from the FICO GOAT (https://youtu.be/uxYIFMlkzFM). “Rather, this event will be marked by the friendship, camaraderie, sportsmanship - and most of all - the respect for all -- which were on full display during the 2017 USBC Open Championship in beloved Las Vegas, NV.
“In addition to this amazing SubscriberWise sponsorship honor, the other special privilege to mark the occasion was the opportunity to cheer my cousin -- Joseph S. Paul -- also a team member and a 21-year USBC bowling championship participant. “Congratulations, Joe! The 2018 SubscriberWise® sponsorship check is in the mail. See you on the lanes next year. “Yes, the event was an incredible success,” concluded Howe. “Meeting each of the team members individually, including several of their family and significant others, was a particular highlight. Of course, the same must be said about the other bowlers that Credit Czar met – including their lovers, family, and friends – all of whom made the experience an unforgettable and most joyful event.” About SubscriberWise and U.S. Credit Czar David Howe: SubscriberWise® launched as the first issuing consumer reporting agency exclusively for the cable industry in 2006. The company filed extensive documentation and end-user agreements to access TransUnion’s consumer database. In 2009, SubscriberWise and TransUnion announced a joint marketing agreement for the benefit of America’s cable operators. Today SubscriberWise is a preferred risk-management solutions provider for the National Cable Television Cooperative. SubscriberWise was founded by David Howe, who is a consultant and credit manager for MCTV, where he has remained employed for two decades. At MCTV, Howe manages the bad debt and equipment losses on annual sales in excess of $65 million. His interest in credit began in 1986 as a 17-year-old high school student. Today, Howe is the highest FICO and Vantage achiever in worldwide financial history since Alan Turing invented the computer. SubscriberWise contributions to the communications industry are quantified in the billions of dollars annually. SubscriberWise is a U.S.A. federally registered trademark of the SubscriberWise Limited Liability Co.


News Article | April 17, 2017
Site: www.nature.com

This year marks the centenary of what seems now to be an extraordinary event in publishing: the time when a UK local newspaper reviewed a dense, nearly 800-page treatise on mathematical biology that sought to place physical constraints on the processes of Darwinism. And what’s more, the Dundee Advertiser loved the book and recommended it to readers. When the author, it noted, wrote of maths, “he never fails to translate his mathematics into English; and he is one of the relatively few men of science who can write in flawless English and who never grudge the effort to make every sentence balanced and good.” The Dundee Advertiser is still going, although it has changed identity: a decade after the review was published, it merged with The Courier, and that is how most people refer to it today. The book is still going, too. If anything, its title — alongside its balanced and good sentences — has become more iconic and recognized as the years have ticked by. The book is On Growth and Form by D’Arcy Thompson. This week, Nature offers its own appreciation, with a series of articles in print and online that celebrate the book’s impact, ideas and lasting legacy. Still in print, On Growth and Form was more than a decade in the planning. Thompson would regularly tell colleagues and students — he taught at what is now the University of Dundee, hence the local media interest — about his big idea before he wrote it all down. In part, he was reacting against one of the biggest ideas in scientific history. Thompson used his book to argue that Charles Darwin’s natural selection was not the only major influence on the origin and development of species and their unique forms: “In general no organic forms exist save such as are in conformity with physical and mathematical laws.” Biological response to physical forces remains a live topic for research. 
In a research paper, for example, researchers report how physical stresses generated at defects in the structures of epithelial cell layers cause excess cells to be extruded. In a separate online publication (K. Kawaguchi et al. Nature http://dx.doi.org/10.1038/nature22321; 2017), other scientists show that topological defects have a role in cell dynamics, as a result of the balance of forces. In high-density cultures of neural progenitor cells, the direction in which cells travel around defects affects whether cells become more densely packed (leading to pile-ups) or spread out (leading to a cellular fast-lane where travel speeds up). A Technology Feature investigates in depth the innovative methods developed to detect and measure forces generated by cells and proteins. Such techniques help researchers to understand how force is translated into biological function. Thompson’s influence also flourishes in other active areas of interdisciplinary research. A research paper offers a mathematical explanation for the colour changes that appear in the scales of ocellated lizards (Timon lepidus) during development (also featured on this week’s cover). It suggests that the patterns are generated by a system called a hexagonal cellular automaton, and that such a discrete system can emerge from the continuous reaction-diffusion framework developed by mathematician Alan Turing to explain the distinctive patterning on animals, such as spots and stripes. (Some of the research findings are explored in detail in the News and Views section.) To complete the link to Thompson, Turing cited On Growth and Form in his original work on reaction-diffusion theory in living systems. Finally, we have also prepared an online collection of research and comment from Nature and the Nature research journals in support of the centenary, some of which we have made freely available to view for one month. Nature is far from the only organization to recognize the centenary of Thompson’s book. 
A full programme of events will run this year around the world, and at the D’Arcy Thompson Zoology Museum in Dundee, skulls and other specimens are being scanned to create digital 3D models. Late last month, this work was featured in The Courier. One hundred years on, Thompson’s story has some way to run yet.
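The discrete-automaton idea touched on above can be made concrete with a toy sketch. This uses a generic one-dimensional automaton (Wolfram's rule 90), not the hexagonal automaton or reaction-diffusion model of the lizard-scale study; it only illustrates how simple local update rules generate global pattern:

```python
# A minimal 1-D cellular automaton (Wolfram rule 90), shown purely as an
# illustration of the discrete-update concept; the actual study described
# above uses hexagonal cells and different rules.

def step_rule90(cells):
    """Each cell becomes the XOR of its two neighbours (fixed boundaries)."""
    n = len(cells)
    return [
        (cells[i - 1] if i > 0 else 0) ^ (cells[i + 1] if i < n - 1 else 0)
        for i in range(n)
    ]

width, steps = 31, 15
row = [0] * width
row[width // 2] = 1          # a single seed cell in the middle

for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = step_rule90(row)
```

Run from a single seed, the rule traces out a Sierpinski-triangle-like pattern, a classic example of complex global form arising from a trivially simple local rule.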


VANCOUVER, BC / ACCESSWIRE / April 26, 2017 / CoinQx Exchange LIMITED, a wholly owned subsidiary of FIRST BITCOIN CAPITAL CORP (OTC PINK: BITCF or "Company", "We", "Us" or "Our"), is the world's first underwriter of Initial Coin Offerings. The emerging proliferation of altcoins is rife with unverifiable ICOs, which some say could produce a mania not unlike the South Sea Company Bubble of the early 18th century, which led Great Britain to enact the Bubble Act. What makes cryptocurrencies unlike that bubble is that they have a type of value and usage unknown to the 1700s. Early comparisons of Bitcoin to the tulip frenzy of 17th-century Holland turned out to be unfounded as well. Undoubtedly there will be many bubbles popping in most altcoins along the path to weeding out the fly-by-nights, and perhaps a final bubble like that of the Internet, when the stock market boom peaked on March 10, 2000. This has presented First Bitcoin Capital with a unique opportunity to act as an underwriter of ICOs, which will also make us a gatekeeper to weed out undesirable coin offerings. We envision that acting as underwriter will help identify the fly-by-night operators, so that many potential speculators will avoid those that do not pass through our due diligence gateway. We believe that we are uniquely suited for this endeavor, inasmuch as we are a transparent public company; we file reports with OTC Markets; the trading of our shares is regulated by FINRA; our CoinQx exchange is registered with FinCEN; we are experienced in peer-to-peer cryptocurrency creation; we have launched our own ICO; and we are the first public company dedicated to this space. We are currently negotiating several underwritings, including "Bonanza", now being offered under the symbol "XZA", which has undergone our due diligence process sufficiently for us to have agreed, at this point, to act as best-efforts underwriter.
As another way to ferret out the less credible coin offerings already trading on cryptocurrency exchanges, our www.altcoinmarket.com web site (under development) will give the altcoin community access to vote all covered altcoins up or down. This new space also lacks conformity; First Bitcoin Capital Corp is therefore developing identifier systems, similar to those FINRA and CUSIP developed, for numbering and symbol generation, to be placed on an open-source blockchain. Another aspect of peer-to-peer coins that makes them unlike the South Sea Company Bubble is the emergence of Decentralized Autonomous States and Organizations (DAS or DAO), bringing the world a previously unknown legal structure and opening new horizons. What makes this brave new world like the South Sea Bubble is the rampant speculation in new ICOs, which may also similarly be seen as gambling. A decentralized autonomous organization (DAO), sometimes labeled a decentralized autonomous corporation (DAC), is an organization that is run through rules encoded as computer programs called smart contracts. A DAO's financial transaction record and program rules are maintained on a blockchain. There are several examples of this business model. The precise legal status of this type of business organization is unclear. The best-known example was The DAO, a DAO for venture capital funding, which was launched with $150 million in crowdfunding in June 2016 and was immediately hacked and drained of US$50 million in cryptocurrency. Decentralized autonomous organizations have been seen by some as difficult to describe. Nevertheless, the conceptual essence of a decentralized autonomous organization has been typified as the ability of blockchain technology to provide a secure digital ledger that tracks financial interactions across the internet, hardened against forgery by trusted timestamping and by dissemination of a distributed database.
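The timestamped, hash-chained ledger just described can be sketched in a few lines of Python. This is a minimal illustration of hash chaining only, with no networking, mining, or consensus; the block fields are invented for the example and do not match any real protocol:

```python
# Minimal hash-chained, timestamped ledger: each block commits to its
# predecessor's hash, so editing any earlier block breaks verification.
import hashlib
import json
import time

def make_block(data, prev_hash, timestamp=None):
    block = {
        "timestamp": timestamp if timestamp is not None else time.time(),
        "data": data,
        "prev_hash": prev_hash,
    }
    serialized = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(serialized).hexdigest()
    return block

def verify_chain(chain):
    """Recompute every hash; any tampered block breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("timestamp", "data", "prev_hash")}
        serialized = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(serialized).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("alice pays bob 5", chain[-1]["hash"]))
chain.append(make_block("bob pays carol 2", chain[-1]["hash"]))

print(verify_chain(chain))        # True: untampered chain verifies
chain[1]["data"] = "alice pays bob 500"
print(verify_chain(chain))        # False: the edit breaks the hash chain
```

The "hardening against forgery" is exactly this property: because every block's hash covers its timestamp, its data, and the previous block's hash, rewriting history requires recomputing every subsequent block, which a widely disseminated copy of the ledger makes detectable.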
This approach eliminates the need to involve a bilaterally accepted trusted third party in a financial transaction, thus simplifying the sequence. The costs of a blockchain-enabled transaction, and of making the associated data available, may be substantially lessened by the elimination of both the trusted third party and the need for repetitious recording of contract exchanges in different records: for example, the blockchain data could in principle, if regulatory structures permitted, replace public documents such as deeds and titles. In theory, a blockchain approach allows multiple cloud computing users to enter a loosely coupled peer-to-peer smart contract collaboration. Buterin proposed that after a DAO was launched, it might be organized to run without human managerial interactivity, provided the smart contracts were supported by a Turing-complete platform. Ethereum, built on a blockchain and launched in 2015, has been described as meeting that Turing threshold, thus enabling DAOs. Decentralized autonomous organizations aim to be open platforms where individuals control their identities and their personal data. Examples of DAOs are Dash, The DAO and Digix.io. A value token called DigixDAO finished its crowdfunding campaign and began trading on exchanges on 28 April 2016. Shareholder participation in DAOs can be problematic. For example, BitShares has seen a lack of voting participation, because it takes time and energy to consider proposals. The precise legal status of this type of business organization is unclear; some similar approaches have been regarded by the U.S.
Securities and Exchange Commission as illegal offers of unregistered securities. Although unclear, a DAO may functionally be a corporation without legal status as a corporation: a general partnership. This means potentially unlimited legal liability for participants, even if the smart contract code or the DAO's promoters say otherwise. Known participants, or those at the interface between a DAO and regulated financial systems, may be targets for regulatory enforcement or civil actions. The code of a given DAO will be difficult to alter once the system is up and running, including bug fixes that would be trivial in centralised code. Corrections for a DAO would require writing new code and agreement to migrate all the funds. Although the code is visible to all, it is hard to repair, thus leaving known security holes open to exploitation unless a moratorium is called to enable bug fixing. In 2016, a specific DAO, The DAO, set a record for the largest crowdfunding campaign to date. However, researchers pointed out multiple issues in its code. The operational procedure for The DAO allowed investors to withdraw at will any money that had not yet been committed to a project; the funds could thus deplete quickly. Although safeguards aimed to prevent gaming of shareholder voting to win investments, there were a "number of security vulnerabilities". These enabled an attempted large withdrawal of funds from The DAO, initiated in mid-June 2016. After much debate, on 20 July 2016 the Ethereum community arrived at a consensus decision to hard-fork the Ethereum blockchain and bail out the original contract; the minority who rejected the fork continued the original chain as Ethereum Classic (ETC). First Bitcoin Capital Corp is planning to launch a series of DAOs in 2017.
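The flaw behind The DAO drain is the classic re-entrancy pattern: the contract paid out before updating its books. The following Python simulation is purely illustrative (the real exploit targeted Solidity contract code; the class, names, and amounts here are invented for the sketch):

```python
# Toy simulation of a re-entrancy flaw: the vulnerable vault pays out
# BEFORE updating its books, so a malicious payee that calls back into
# withdraw() gets paid repeatedly from the same recorded balance.

class VulnerableVault:
    def __init__(self):
        self.balances = {}
        self.total = 0

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.total += amount

    def withdraw(self, who, receive_callback):
        amount = self.balances.get(who, 0)
        if amount > 0 and self.total >= amount:
            receive_callback(amount)        # pay first (the bug)...
            self.total -= amount
            self.balances[who] = 0          # ...update the books last

vault = VulnerableVault()
vault.deposit("honest", 70)
vault.deposit("attacker", 10)

stolen = []
def reenter(amount):
    stolen.append(amount)
    if sum(stolen) + amount <= vault.total:  # room for one more payout
        vault.withdraw("attacker", reenter)  # re-enter before books update

vault.withdraw("attacker", reenter)
print(sum(stolen))   # prints 80: deposited 10, drained the whole vault
```

The fix mirrors the checks-effects-interactions discipline later stressed in smart contract practice: update the balance and total first, and only then pay out, so a re-entrant call sees a zeroed balance.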
The Company continues to actively offer AltCoin (ALT), its first ICO, as it approaches completion, via http://www.altcoinmarketcap.com. We are proud to report that ALT is now listed on four exchanges: CCEX.com, Cryptopia, OMNIDEX, and our own CoinQX.com. Altcoin (symbol ALT) was also added today to our competitor's website, coinmarketcap.com. We have also listed the six recently launched indicative Bitcoin hard-fork outcomes (see list below) on www.CoinQX.com, for speculators to predict the successful one(s), as well as adding "Bond." First Bitcoin Capital is engaged in developing digital currencies, proprietary blockchain technologies, and the digital currency exchange www.CoinQX.com. We see this step as a tremendous opportunity to create further shareholder value by leveraging management's experience in developing and managing complex blockchain technologies and developing new types of digital assets. As the first publicly traded cryptocurrency- and blockchain-centered company (with shares traded both in the US OTC Markets as [BITCF] and as [BIT] on crypto exchanges), we want to provide our shareholders with diversified exposure to digital cryptocurrencies and blockchain technologies. At this time the Company owns and operates the following digital assets, among others. List of Omni protocol coins issued on the Bitcoin blockchain owned by the Company: http://omnichest.info/lookupadd.aspx?address=1FwADyEvdvaLNxjN1v3q6tNJCgHEBuABrS Certain statements contained in this press release may constitute "forward-looking statements." Forward-looking statements provide current expectations of future events based on certain assumptions and include any statement that does not directly relate to any historical or current fact. Actual results may differ materially from those indicated by such forward-looking statements as a result of various important factors, as may be disclosed in the company's filings.
In addition to these factors, actual future performance, outcomes, and results may differ materially because of more general factors including (without limitation) general industry and market conditions and growth rates, economic conditions, and governmental and public policy changes. The forward-looking statements included in this press release represent the Company's views as of the date of this press release and these views could change. However, while the Company may elect to update these forward-looking statements at some point in the future, the Company specifically disclaims any obligation to do so. These forward-looking statements should not be relied upon as representing the Company's views as of any date subsequent to the date of the press release. Such forward-looking statements are risks that are detailed in the Company's filings, which are on file at www.OTCMarkets.com. Contact us via: info@bitcoincapitalcorp.com or visit http://www.bitcoincapitalcorp.com
