Robustness of a neural network used for image classification: The effect of applying distortions on adversarial examples
Högskolan i Gävle, Faculty of Engineering and Sustainable Development, Department of Industrial Development, IT and Land Management, Computer Science.
2018 (English). Independent thesis, basic level (professional degree), 10 credits / 15 HE credits. Student thesis.
Abstract [en]

Powerful classifiers such as neural networks have long been used to recognise images; these images might depict objects such as animals, people, or plain text. Distortions affect a neural network's ability to recognise images, which may be altered by, for example, camera-related distortions. Camera-related distortions, and how they affect accuracy, have previously been explored. Recently, it has been shown that images can be intentionally made harder to recognise, an effect that lasts even after they have been photographed. Such images are known as adversarial examples. The purpose of this thesis is to evaluate how well a neural network can recognise adversarial examples that are also distorted. To evaluate the network, the adversarial examples are distorted in different ways and then fed to the neural network. Different kinds of distortions (rotation, blur, contrast and skew) were used to distort the examples. For each type and strength of distortion, the network's ability to classify was measured. It is shown that all distortions influenced the neural network's ability to recognise images. It is concluded that the type and strength of a distortion are important factors when classifying distorted adversarial examples, but also that some distortions, rotation and skew, retain their characteristic influence on accuracy even when combined with other distortions.
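The evaluation procedure the abstract describes, applying distortions of varying strength to adversarial examples before classification, can be sketched in Python. This is a hypothetical illustration, not the thesis code: `adjust_contrast` and `box_blur` are stand-ins for two of the four distortion types studied (contrast and blur), applied to a 28×28 grayscale array of the kind a LeNet classifier takes as MNIST input.

```python
# Hypothetical sketch (not the thesis implementation): two of the
# distortion types studied, contrast change and blur, applied to a
# 28x28 grayscale image standing in for an adversarial example.
import numpy as np


def adjust_contrast(img, factor):
    """Scale each pixel's deviation from the image mean by `factor`.

    factor < 1 lowers contrast, factor > 1 raises it; the result is
    clipped back into the valid intensity range [0, 1].
    """
    mean = img.mean()
    return np.clip(mean + factor * (img - mean), 0.0, 1.0)


def box_blur(img, radius=1):
    """Simple box blur: average each pixel over a (2r+1)^2 neighbourhood."""
    padded = np.pad(img, radius, mode="edge")  # replicate border pixels
    out = np.zeros_like(img)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)


# Toy 28x28 "image" standing in for a distorted adversarial example.
img = np.random.default_rng(0).random((28, 28))
blurred = box_blur(img, radius=1)
low_contrast = adjust_contrast(img, factor=0.5)
```

In the experiment described above, each distorted variant would then be fed to the trained network, and classification accuracy recorded per distortion type and strength.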

Place, publisher, year, edition, pages
2018, p. 25
Keywords [en]
LeNet, Distorted Images, MNIST, Adversarial Examples
National subject category
Computer Sciences
Identifiers
URN: urn:nbn:se:hig:diva-26118
OAI: oai:DiVA.org:hig-26118
DiVA, id: diva2:1181511
Subject / course
Computer science
Educational program
Bachelor of Science in Engineering
Supervisors
Examiners
Available from: 2018-02-09. Created: 2018-02-08. Last updated: 2018-02-09. Bibliographically approved.

Open Access in DiVA

fulltext (585 kB), 339 downloads
File information
File name: FULLTEXT01.pdf
File size: 585 kB
Checksum: SHA-512
775e15102dae56d41b3a321199d8403635bd19adae39bd292a4dcc01f572ceccecbbefbe412525630a14573de5241e2d158ab54e6a3385ba30ffb565f9ba5bbc
Type: fulltext
Mimetype: application/pdf

Total: 339 downloads
The number of downloads is the sum of downloads for all full texts. It may include, for example, earlier versions that are no longer available.

Total: 290 hits