Building your own Self-attention GANs


A PyTorch implementation of SAGAN on the MNIST and CelebA datasets

(Meme from imgflip.com)

GANs, also known as Generative Adversarial Networks, are one of the most popular topics in machine learning these days. A GAN consists of two different neural network models, one called the Generator and one called the Discriminator. That may sound hard to understand, so let me put it this way: say we want to forge famous paintings, starting with no knowledge of painting at all. What should we do? Most would say: just look at the paintings and learn how to imitate them. That works up to a point, and you would gradually get better at painting, but forging is not a one-man job. You would also need a friend to stand in front of one real painting and one that you forged, and guess which one is real. It would be pretty easy for him at the beginning, but keep at it and you will eventually confuse your friend.

In GANs, the generator is like you, the forger, and the discriminator is the friend who specializes in telling which painting is fake. Think about the goal here: you want to make it hard for your friend to tell real from fake. If your friend were to give each painting a probability of being real, from 0 to 1, you would want him to give 0.5 to every painting you show him, whether real or forged. This is also the objective of GANs, as reflected in their loss functions.
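To make that concrete, here is a minimal sketch of the classic GAN objective in PyTorch (the function names are mine, and SAGAN itself actually uses a hinge loss, but the idea is the same): the discriminator is rewarded for scoring real images high and fakes low, while the generator is rewarded for fooling it.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(d_real_logits, d_fake_logits):
    # The discriminator wants to say "real" (1) for real paintings
    # and "fake" (0) for forged ones.
    real_loss = F.binary_cross_entropy_with_logits(
        d_real_logits, torch.ones_like(d_real_logits))
    fake_loss = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.zeros_like(d_fake_logits))
    return real_loss + fake_loss

def generator_loss(d_fake_logits):
    # The generator wants the discriminator to call its forgeries real.
    return F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
```

At the equilibrium these two losses push toward, the discriminator outputs 0.5 for everything, which is exactly the "confused friend" from the analogy above.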

We also see DCGAN a lot, which stands for Deep Convolutional GAN. It is a GAN design specialized for image generation, using convolutional layers in both the generator and the discriminator, so each network works much like a CNN. A Self-attention GAN (SAGAN) is a DCGAN that adds self-attention layers. The idea of self-attention has been around for years, also appearing in the research literature under the name non-local operations. Think about how convolution works: it combines nearby pixels and extracts features from local blocks, so each layer operates "locally". In contrast, self-attention layers learn from distant blocks as well. In 2017, Google published the paper "Attention Is All You Need", bringing even more hype to the topic. For a single image input, it works like this:
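As a rough sketch of what that looks like in code, a SAGAN-style self-attention layer in PyTorch might be written as follows (a minimal illustration assuming at least 8 input channels; the class and variable names are my own, not from any particular codebase):

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """SAGAN-style self-attention over an image feature map."""
    def __init__(self, in_channels):
        super().__init__()
        # 1x1 convolutions project the feature map into query/key/value
        # spaces; queries and keys are reduced to in_channels // 8 channels.
        self.query = nn.Conv2d(in_channels, in_channels // 8, 1)
        self.key = nn.Conv2d(in_channels, in_channels // 8, 1)
        self.value = nn.Conv2d(in_channels, in_channels, 1)
        # gamma scales the attention output; it starts at 0 so the network
        # relies on local convolutional features before attending globally.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        n = h * w  # number of spatial locations
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)  # (b, n, c//8)
        k = self.key(x).view(b, -1, n)                     # (b, c//8, n)
        v = self.value(x).view(b, -1, n)                   # (b, c, n)
        # Attention map: how strongly each location attends to every other.
        attn = torch.softmax(torch.bmm(q, k), dim=-1)      # (b, n, n)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x  # residual connection
```

You would typically drop this module between convolutional blocks in both the generator and the discriminator; the residual connection and the zero-initialized gamma let the network learn local features first and bring in long-range dependencies gradually.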
