This paper studies the problem of offloading an application consisting of dependent tasks in multi-access edge computing (MEC). The problem is challenging because multiple conflicting objectives, e.g., completion time, energy consumption, and computation overhead, must be optimized simultaneously. Recently, several reinforcement learning (RL) based methods have been proposed to address the problem. However, these methods, referred to as single-objective RLs (SORLs), define the user utility as a linear scalarization of the objectives and thus ignore the conflict between them. This paper formulates a multi-objective optimization problem to simultaneously minimize the application completion time, the energy consumption of the mobile device, and the usage charge for edge computing, subject to task dependency constraints. Moreover, the relative importance (preferences) of the objectives may change over time in MEC, which traditional SORLs struggle to handle. To overcome this, we first model a multi-objective Markov decision process, in which the scalar reward is extended to a vector-valued reward whose elements correspond to the individual objectives. We then propose an improved multi-objective reinforcement learning (MORL) algorithm, in which a tournament selection scheme is designed to select important preferences so as to effectively maintain previously learned policies. Simulation results demonstrate that the proposed algorithm achieves a good tradeoff among the three objectives and significantly outperforms a number of existing algorithms.
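
The contrast between the vector-valued reward and the linear scalarization used by SORL baselines can be sketched as follows. This is a minimal illustration, not the paper's implementation: the objective values, the cost-negation convention, and the preference weights are made-up placeholders, and `vector_reward`/`scalarize` are hypothetical helper names.

```python
import numpy as np

def vector_reward(completion_time, energy, charge):
    """Vector-valued reward: one element per objective (completion time,
    device energy consumption, edge usage charge). Each cost is negated
    so that a larger reward means better performance on that objective."""
    return np.array([-completion_time, -energy, -charge])

def scalarize(reward_vec, preference):
    """Linear scalarization used by SORL baselines: a preference-weighted
    sum collapses the reward vector into a single scalar, fixing one
    tradeoff between the objectives."""
    preference = np.asarray(preference, dtype=float)
    preference = preference / preference.sum()  # normalize the weights
    return float(preference @ reward_vec)

# Placeholder objective values for one offloading decision.
r = vector_reward(completion_time=2.0, energy=1.5, charge=0.5)

# A SORL trained under one fixed preference produces a different utility
# than the same policy evaluated under a changed preference, which is why
# time-varying preferences are hard for SORLs to handle.
print(scalarize(r, [0.5, 0.3, 0.2]))  # prefer low completion time
print(scalarize(r, [0.2, 0.3, 0.5]))  # preference shifted toward low charge
```

A MORL agent, by contrast, learns from the reward vector `r` directly, so a single set of learned policies can serve different preference vectors without retraining from scratch.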