Collectible card games (CCGs), such as Magic: The Gathering and Hearthstone, are played by tens of millions of players worldwide, and their vast state and action spaces, intricate rules and diverse cards make them challenging for humans and artificial intelligence (AI) agents alike. In these games, players build a deck from cards that represent creatures, items or spells from a fantasy world and use it to battle other players. To win, players must therefore be proficient in two interdependent tasks: deck building and battling. The advent of strong and fast AI players would enable, for instance, thorough playtesting of new cards before they are released to the public, a longstanding challenge in the CCG industry. In this thesis, we present a deep reinforcement learning approach to deck building in the arena mode, an understudied game mode present in most commercial collectible card games. In arena, players build decks immediately before battling by drafting one card at a time from randomly presented candidates. We formulate the problem in a game-agnostic manner and investigate three approaches that differ in how they take the cards drafted so far into account when making the next choice, using different game state representations and types of neural networks. We perform experiments on Legends of Code and Magic, a collectible card game designed for AI research. Considering the win rate of the decks when used by fixed battling AIs, the results show that our trained draft agents outperform the game's best existing draft agents, and do so by building very different decks. Moreover, a participant in the Strategy Card Game AI Competition improves from tenth to fourth place when using our best draft agent to build its decks. We conclude with a discussion of the results, contributions and limitations of this work, as well as directions for future research.