Modelling the language of the immune system with machine learning (first steps)


Click here for our improved statistical classifier for immune repertoires, Dynamic Kernel Matching

Statistical classifiers for diagnosing disease from immune repertoires

LABORATORY OF DR. LINDSAY COWELL

Description

The full set of antibodies and immune receptors in an individual contains traces of past and current immune responses. These traces can serve as biomarkers for diseases mediated by the adaptive immune system (e.g. infectious disease, organ rejection, autoimmune disease, cancer). Only a handful of immune receptors that can be sequenced from a patient are expected to contain these traces. Here we present the source code to a method for elucidating these traces.

First, the CDR3 is parsed from every antibody sequence in a patient (see VDJ Server). The CDR3 is then cut into fixed-length subsequences that we call snippets. These are nothing more than the k-mers of the CDR3. The amino acid residues of each snippet are then described by their biochemical properties in a position-dependent manner using Atchley factors.
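The snippet extraction and encoding steps above can be sketched as follows. This is a minimal illustration, not the repository's code: the helper names are ours, and the Atchley-factor values shown are placeholders standing in for the published table (Atchley et al., 2005), which covers all 20 amino acids.

```python
import numpy as np

# Illustrative Atchley-factor table: each amino acid maps to 5 biochemical
# factor values. The numbers below are placeholders, not the published values.
ATCHLEY = {
    "A": [-0.59, -1.30, -0.73, 1.57, -0.15],
    "C": [-1.34, 0.47, -0.86, -1.02, -0.26],
    "K": [1.83, -0.56, 0.53, -0.28, 1.65],
    "S": [-0.23, 1.40, -4.76, 0.67, -2.65],
    # ... remaining 16 amino acids omitted for brevity
}

def snippets(cdr3, k=6):
    """Cut a CDR3 amino-acid sequence into its overlapping k-mers ("snippets")."""
    return [cdr3[i:i + k] for i in range(len(cdr3) - k + 1)]

def encode(snippet):
    """Describe each residue by its factor values, position by position.
    Returns an array of shape (k, 5): one row of 5 factors per position."""
    return np.array([ATCHLEY[aa] for aa in snippet])

kmers = snippets("CASSK", k=3)  # → ['CAS', 'ASS', 'SSK']
features = encode(kmers[0])    # shape (3, 5)
```

Because the encoding keeps one row per position, the detector function can weight the same biochemical property differently at different positions within the snippet.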

The main idea is to score every snippet by its biochemical features with a detector function and to aggregate the scores into a single value that represents a diagnosis. Because only a handful of snippets are expected to score highly in patients with a disease, we aggregate the scores by taking the maximum. The maximum score is then used to predict the probability that a patient has a positive diagnosis: a high score suggests a positive diagnosis, and the absence of any high score suggests a negative diagnosis. The parameters of the detector function are fitted by maximizing the log-likelihood (equivalently, minimizing the cross-entropy error) that each diagnosis is correct.
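The score-then-max-then-probability pipeline can be sketched as below. This is a hypothetical minimal form: the linear detector and the function names are our simplification for illustration, not the model defined in `model.py`.

```python
import numpy as np

def sigmoid(z):
    """Map a real-valued score to a probability."""
    return 1.0 / (1.0 + np.exp(-z))

def detector_score(snippet_features, weights, bias):
    """Score one snippet's (k, 5) biochemical feature array.
    Here a simple linear detector over the flattened features."""
    return float(np.dot(weights, snippet_features.ravel()) + bias)

def predict(patient_snippets, weights, bias):
    """Score every snippet, aggregate by the maximum score, and map
    the maximum to the probability of a positive diagnosis."""
    scores = [detector_score(s, weights, bias) for s in patient_snippets]
    return sigmoid(max(scores))

def cross_entropy(p, label):
    """Negative log-likelihood of one diagnosis (label is 0 or 1)."""
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))
```

Taking the maximum makes the prediction sensitive to a single high-scoring snippet, which matches the biology: one disease-associated receptor among thousands is enough to signal a positive diagnosis.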

The model is fitted to the training data using gradient-based optimization. First, initial values are randomly drawn for each parameter. Then 2,500 steps of gradient-based optimization are used to find a locally optimal fit to the data. We find that the fitting procedure must be repeated hundreds of thousands of times to find a good fit to the training data. Using TensorFlow, the fitting procedure is run repeatedly in parallel on a GPU. We call each run a "replica", and the replica with the best fit to the training data is then scored on unseen, held-out data.
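The random-restart ("replica") procedure can be illustrated on a toy problem. This is a plain-Python stand-in for the parallel TensorFlow implementation in `train.py`; the tiny dataset, learning rate, and replica count are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, clearly separable data standing in for the repertoire features:
# negative inputs are labeled 0, positive inputs are labeled 1.
x = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([0.0, 0.0, 1.0, 1.0])

def loss_and_grad(w, b):
    """Cross-entropy loss of a 1-D logistic model and its gradients."""
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    return loss, np.mean((p - y) * x), np.mean(p - y)

best = None
for replica in range(8):           # each restart plays the role of a "replica"
    w, b = rng.normal(size=2)      # random initial values for the parameters
    for step in range(2500):       # fixed number of gradient steps per replica
        loss, dw, db = loss_and_grad(w, b)
        w -= 0.1 * dw
        b -= 0.1 * db
    if best is None or loss < best[0]:
        best = (loss, w, b)        # keep the replica that fits training data best
```

In the real implementation the replicas run as parallel threads on a GPU rather than a sequential loop, but the selection rule is the same: only the best-fitting replica is carried forward to be scored on held-out data.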

For a complete description of this approach, see our publication in BMC Bioinformatics:

Requirements

Download

  • Download: zip
  • Git: git clone https://github.com/jostmey/MaxSnippetModel

Primary Files

  • model.py
  • train.py
  • score.py
  • dataplumbing.py (Data used to develop the approach cannot be made available at this time)
  • dataplumbing_synthetic_data.py (Overwrite dataplumbing.py with this file to see how the model performs on synthetic data)

Update

Improved repertoire classification models are published under:

