The complex nature of real-world problems calls for heterogeneity in both machine learning (ML) models and hardware systems. On the algorithm side, heterogeneity in ML models arises from multi-sensor perception and multi-task learning, i.e., multi-modality multi-task (MMMT) models, resulting in diverse deep neural network (DNN) layers and computation patterns. On the system side, it has become prevalent to integrate multiple dedicated acceleration components into one system. This introduces a new problem, heterogeneous model to heterogeneous system mapping (H2H), in which both computation and communication efficiency must be considered. Whereas previous mapping algorithms focus only on computation patterns, in this work we propose a novel mapping algorithm with both computation and communication awareness. By slightly sacrificing computation efficiency, communication latency is largely reduced, which improves overall system performance and lowers energy consumption. We evaluate our approach on MAESTRO, achieving 15%-74% latency improvement and 23%-64% energy reduction compared with the existing computation-prioritized mapping algorithm.