Clonmel RFC won’t fear Dundalk this afternoon

With both sides having won their opening matches, the winner of today's decider will become the favourite to win the Round Robin series and progress to the play-off. Dundalk beat Instonians 15 points to 11 in their opening game at Shaw's Bridge. Clonmel team manager Joe Winston says their opponents are the favourites heading into today's game, but his side are confident in their abilities. Today's game kicks off at 2.30pm.

BoyleSports signs Coventry City sponsorship

BoyleSports has grown its football sponsorship portfolio after being named 'principal partner' of Coventry City Football Club. This story featured in today's SBC News 90.

The agreement, which covers the 2020/21 season, will see the BoyleSports logo adorn the front of the playing kit and replica shirts next season.

Conor Gray, CEO of BoyleSports, welcomed the new deal. He said: "Having chosen the Midlands to be the base for our expansion into Britain, we are proud to cement our commitment to both Coventry and to the wider Midlands area with this deal.

"We see exciting times ahead for BoyleSports in the Midlands, and with their recent promotion to the Championship, we wish Coventry City FC all the very best in their upcoming season."

BoyleSports began its expansion into the UK retail market last year after acquiring 13 stores in the Midlands, three of which were in Coventry.
Since then, the bookmaker has acquired a further six shops in the area.

The partnership with the Irish bookmaker is Coventry's highest-value front-of-shirt sponsorship deal since the 2011/12 campaign and will provide 'vital revenue' to the club and the team.

Coventry City Chief Executive Dave Boddy added: "We're very pleased to welcome BoyleSports to Coventry City as our new shirt sponsor and principal club partner.

"This is a significant deal for the Football Club and will provide much needed and significant revenue for the Club, helping to support Mark Robins and the team in preparation for the upcoming Championship season.

"BoyleSports have already shown great enthusiasm about their new sponsorship and supporting the Sky Blues, and we look forward to working with them. In line with EFL guidance, the Club and BoyleSports will work closely to ensure that the promotion of any gambling products is conducted responsibly and in line with all appropriate licensing and regulatory guidelines."

Researchers, legislators and Big Tech are trying to combat the growing threat

What Is a Deepfake? And how to fight back if you're a victim.

Like a zombie horde, they keep coming. First, there were the pixelated likenesses of actresses Gal Gadot and Scarlett Johansson brushstroked into dodgy user-generated adult films. Then a disembodied digital Barack Obama and Donald Trump appeared in clips they never agreed to, saying things the real Obama and Trump never said.
And in June, a machine-learning-generated version of Facebook CEO Mark Zuckerberg making scary comments about privacy went viral.

Welcome to the age of deepfakes, an emerging threat powered by artificial intelligence that puts words in the mouths of people in video or audio clips, conjures convincing headshots from a sea of selfies, and even puts individuals in places they've never been, interacting with people they've never met. Before long, it's feared, the ranks of deepfake deceptions will include politicians behaving badly, news anchors delivering fallacious reports, and impostor executives trying to bluff their way past employees so they can commit fraud.

So far, women have been the biggest victims of deepfakes. In late June, the app Deepnudes shut down amid controversy after journalists disclosed that users could feed the app ordinary photos of women and have it spit out naked images of them. There's concern the fallout from the technology will go beyond the creepy, especially if it falls into the hands of rogue actors looking to disrupt elections and tank the shares of public companies.

The tension is boiling over. Lawmakers want to ban deepfakes. Big Tech believes its engineers will develop a fix. Meanwhile, the researchers, academics, and digital rights activists on the front lines bemoan that they're ill-equipped to fight this battle. Sam Gregory, program director at the New York City–based human rights organization Witness, points out that it's far easier to create a deepfake than it is to spot one. Soon, you won't even need to be a techie to make a deepfake.
[Photo credits: Zuckerberg: Courtesy of Facebook; Obama: Neilson Barnard, Getty Images; Trump: Saul Loeb, AFP/Getty Images; Pelosi: Chip Somodevilla, Getty Images; Gadot & Johansson: Mike Coppola, Getty Images; Wireframes: Lidiia Moor, Getty Images]

Witness has been training media companies and activists to identify A.I.-generated "synthetic media," such as deepfakes and facial reenactments (the recording and transferring of facial expressions from one person to another) that could undermine trust in their work. Gregory and others have begun to call on tech companies to do more to police these fabrications. "As companies release products that enable creation, they should release products that enable detection as well," says Gregory.

Software maker Adobe Systems has found itself on both sides of this debate. In June, computer scientists at Adobe Research demonstrated a powerful text-to-speech machine-learning algorithm that can literally put words in the mouth of a person on film. A company spokesperson notes that Adobe researchers are also working to help unmask fakes. For example, Adobe recently released research that could help detect images manipulated by Photoshop, its popular image-editing software. But as researchers and digital rights activists note, the open-source community, made up of amateur and independent programmers, is far more organized around making deepfakes persuasive and thus harder to spot. For now, bad actors have the advantage.

This is one reason that lawmakers are stepping into the fray. The House Intelligence Committee convened a hearing in June about the national security challenges of artificial intelligence, manipulated media, and deepfakes. The same day, Rep. Yvette Clarke (D-N.Y.) introduced the DEEPFAKES Accountability Act, the first attempt by Congress to criminalize synthetic media used to deceive, defraud, or destabilize the public.
State lawmakers in Virginia, Texas, and New York, meanwhile, have introduced or enacted their own legislation in what's expected to be a torrent of laws aimed at outmaneuvering the fakes.

Jack Clark, policy director at OpenAI, an A.I. think tank, testified on Capitol Hill in June about the deepfakes problem. He tells Fortune that it's time "industry, academia, and government worked together" to find a solution. The public and private sectors, Clark notes, have joined forces in the past on developing standards for cellular networks and for regulating public utilities. "I expect A.I. is important enough we'll need similar things here," he says.

In an effort to avoid such government intervention, tech companies are trying to show that they can handle the problem without clamping down too hard on free speech. YouTube has removed a number of deepfakes from its service after users flagged them. And recently, Facebook's Zuckerberg said that he's considering a new policy for policing deepfakes on his site, enforced by a mix of human moderators and automation.

The underlying technology behind most deepfakes and A.I.-powered synthetic media is the generative adversarial network, or GAN, invented in 2014 by the Montreal-based Ph.D. student Ian Goodfellow, who later worked at Google before joining Apple this year.

Until his invention, machine-learning algorithms had been relatively good at recognizing images from vast quantities of training data, but that's about all. With the help of newer technology, like more powerful computer chips, GANs have become a game changer. They enable algorithms to not just classify but also create pictures. Show a GAN an image of a person standing in profile, and it can produce entirely manufactured images of that person, from the front or the back. Researchers immediately heralded the GAN as a way for computers to fill in the gaps in our understanding of everything around us, to map, say, parts of distant galaxies that telescopes can't penetrate.
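The adversarial idea behind a GAN can be illustrated with a deliberately tiny sketch. The code below is a hypothetical toy, nothing like a production deepfake system: a one-layer "generator" tries to turn standard noise into samples from a target 1-D Gaussian (here assumed to be N(4, 1)), while a logistic "discriminator" learns to score real samples higher than fakes. Each is nudged by hand-coded gradient steps against the other, using the non-saturating generator loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: g(z) = w_g * z + b_g  (learns to shift/scale noise)
w_g, b_g = 1.0, 0.0
# Discriminator: d(x) = sigmoid(w_d * x + b_d)  (real -> 1, fake -> 0)
w_d, b_d = 0.1, 0.0

lr = 0.05
for step in range(2000):
    real = rng.normal(4.0, 1.0, size=32)   # samples from the target N(4, 1)
    z = rng.normal(0.0, 1.0, size=32)      # noise fed to the generator
    fake = w_g * z + b_g

    # Discriminator ascent on log d(real) + log(1 - d(fake))
    p_real = sigmoid(w_d * real + b_d)
    p_fake = sigmoid(w_d * fake + b_d)
    w_d += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    b_d += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator ascent on log d(fake): push fakes toward "looks real"
    p_fake = sigmoid(w_d * fake + b_d)
    w_g += lr * np.mean((1 - p_fake) * w_d * z)
    b_g += lr * np.mean((1 - p_fake) * w_d)

# After training, generator output should have drifted toward the target.
samples = w_g * rng.normal(0.0, 1.0, size=1000) + b_g
print(float(samples.mean()))
```

The two updates pull in opposite directions: the discriminator sharpens its real-versus-fake boundary, which in turn gives the generator a gradient telling it which way to move its samples. That tug-of-war, scaled up to deep convolutional networks over images, is what lets deepfake models synthesize faces rather than merely classify them.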
Other programmers saw it as a way to make super-convincing celebrity porn videos. In late 2017, a Reddit user named "Deepfakes" did just that, uploading to the site adult videos featuring the uncanny likenesses of famous Hollywood actresses. The deepfake phenomenon exploded from there.

Soon after, Giorgio Patrini, a machine-learning Ph.D. who became fascinated—and then concerned—with how GAN models were being exploited, left the research lab and cofounded Deeptrace Labs, a Dutch startup that says it's building "the antivirus for deepfakes." Clients include media companies that want to give reporters tools to spot manipulations of their work or to vet the authenticity of user-generated video clips. Patrini says that in recent months, corporate brand-reputation managers have contacted his firm, as have network security specialists. "There's particular concern about deepfakes and the potential for it to be used in fraud and social engineering attempts," says Patrini.

Malwarebytes Labs of Santa Clara, Calif., recently warned of something similar, saying in a June report on A.I.-powered threats that "deepfakes could be used in incredibly convincing spear-phishing attacks that users would be hard-pressed to identify as false." The report continues, "Imagine getting a video call from your boss telling you she needs you to wire cash to an account for a business trip that the company will later reimburse." In the world of deepfakes, you don't need to be famous to be cast in a leading role.

Correction: The original version of this article incorrectly characterized technology produced by Adobe that detects images manipulated by Photoshop. What Adobe unveiled is early-stage research to do that, not a commercial product, or "tool."

This article originally appeared in the August 2019 issue of Fortune.