KDD 2019 has two tracks: the Research track and the Applied Data Science (ADS) track.
This year the KDD Research track reviewed about 1,200 submissions and accepted about 110 oral papers and 60 poster papers, an acceptance rate of roughly 14%, nearly 4 percentage points below the 17%-18% of recent years. The Research track numbers for the previous three years were: 983 submissions, 178 accepted (2018); 748 submissions, 130 accepted (2017); and 784 submissions, 142 accepted (2016).
The ADS track received about 700 submissions this year, with 45 oral papers and 100 poster papers accepted.
Today we recommend work from Tsinghua University and JD.com published at KDD 2019.
-
Paper Title
Reinforcement Learning to Optimize Long-term User Engagement in Recommender Systems
-
Authors
Lixin Zou, Long Xia, Zhuoye Ding, Jiaxing Song, Weidong Liu, Dawei Yin
-
Conference / Year
KDD 2019
-
Link
http://export.arxiv.org/abs/1902.05570
-
Abstract
Recommender systems play a crucial role in our daily lives. The feed streaming mechanism has been widely adopted in recommender systems, especially in mobile apps. The feed streaming setting provides users with an interactive manner of recommendation in never-ending feeds. In such an interactive setting, a good recommender system should pay more attention to user stickiness, which goes far beyond classical instant metrics and is typically measured by long-term user engagement. Directly optimizing long-term user engagement is a non-trivial problem, as the learning target is usually not available to conventional supervised learning methods. Although reinforcement learning (RL) naturally fits the problem of maximizing long-term rewards, applying RL to optimize long-term user engagement still faces challenges: user behaviors are versatile and difficult to model, typically consisting of both instant feedback (e.g., clicks, ordering) and delayed feedback (e.g., dwell time, revisits); in addition, effective off-policy learning is still immature, especially when combining bootstrapping and function approximation.
To address these issues, this work introduces a reinforcement learning framework, FeedRec, to optimize long-term user engagement. FeedRec includes two components: 1) a Q-Network, designed as a hierarchical LSTM, which takes charge of modeling complex user behaviors, and 2) an S-Network, which simulates the environment, assists the Q-Network, and avoids instability of convergence in policy learning. Extensive experiments on synthetic data and real-world large-scale data show that FeedRec effectively optimizes long-term user engagement and outperforms state-of-the-art methods.
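To make the Q-Network component concrete, below is a minimal PyTorch sketch of a hierarchical-LSTM Q-network in the spirit of the abstract. This is not the authors' implementation: the split into click and dwell channels, the module layout, and all names and dimensions are illustrative assumptions.

```python
# Hypothetical sketch of a hierarchical-LSTM Q-network (not the paper's code).
import torch
import torch.nn as nn

class HierarchicalQNetwork(nn.Module):
    def __init__(self, item_dim, hidden_dim, num_actions):
        super().__init__()
        # Low-level LSTM consumes the raw interaction sequence.
        self.raw_lstm = nn.LSTM(item_dim, hidden_dim, batch_first=True)
        # High-level LSTMs track separate feedback channels (assumed split:
        # instant clicks vs. dwell-time signals), one way to realize the
        # "hierarchical" behavior modeling described in the abstract.
        self.click_lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.dwell_lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        # Q-head maps the concatenated user state to per-action values.
        self.q_head = nn.Sequential(
            nn.Linear(3 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_actions),
        )

    def forward(self, item_seq, click_mask, dwell_mask):
        # item_seq: (B, T, item_dim); masks: (B, T, 1) selecting the steps
        # that carry the corresponding feedback type.
        raw_out, _ = self.raw_lstm(item_seq)                  # (B, T, H)
        click_out, _ = self.click_lstm(raw_out * click_mask)  # (B, T, H)
        dwell_out, _ = self.dwell_lstm(raw_out * dwell_mask)  # (B, T, H)
        # Final hidden step of each channel summarizes the user state.
        state = torch.cat(
            [raw_out[:, -1], click_out[:, -1], dwell_out[:, -1]], dim=-1
        )
        return self.q_head(state)                             # (B, num_actions)
```

Routing a shared low-level representation into per-feedback-type LSTMs is just one plausible reading of "hierarchical"; consult the paper for the actual architecture.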
Why We Recommend It
This paper is joint work by Tsinghua University and JD.com published at KDD 2019. To tackle the difficulty of modeling user behavior when applying reinforcement learning to recommendation, the authors propose a new RL framework, FeedRec, consisting of two networks: a Q-Network that models complex user behavior with a hierarchical LSTM, and an S-Network that simulates the environment to assist and stabilize the Q-Network's training. The method is validated on both synthetic and real-world data and achieves state-of-the-art results. A sketch of the S-Network idea follows below.
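Here is a minimal sketch of how an environment simulator such as the S-Network could assist Q-learning, assuming a generic q_net that maps a state tensor to per-action values (a simplification of the network sketched above). SNetwork, td_step, and every interface here are assumptions for illustration, not the paper's API.

```python
# Hypothetical sketch of simulator-assisted Q-learning (illustrative only).
import torch
import torch.nn as nn

class SNetwork(nn.Module):
    """Learned simulator: predicts a scalar engagement reward for (state, action)."""
    def __init__(self, state_dim, action_dim, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1)).squeeze(-1)

def td_step(q_net, s_net, optimizer, state, action_idx, action_vec,
            next_state, gamma=0.99):
    """One temporal-difference update; the simulated reward stands in for
    delayed user feedback that is hard to observe online."""
    with torch.no_grad():
        reward = s_net(state, action_vec)                        # (B,)
        target = reward + gamma * q_net(next_state).max(dim=-1).values
    q_value = q_net(state).gather(1, action_idx.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q_value, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```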
Links:
Paper (PDF):
http://export.arxiv.org/pdf/1902.05570