Coordinates to Color Quaternion Neural Network (CoCoQNN) and Coordinates to Color Quaternion Convolutional Neural Network (CoCoQCNN) for Image Processing

Eduardo Bayro-Corrochano -- Institute of Automation and Robotics, Poznan University of Technology, Poznan, Poland

In this work, we propose three architectures for mapping pixel coordinates in an image to their corresponding RGB (or grayscale) values. All three are based on the CocoNet (coordinates-to-color network) model: the Coordinates to Color Quaternion Neural Network (CoCoQNN), the Coordinates to Color Convolutional Neural Network (CoCoCNN), and the Coordinates to Color Quaternion Convolutional Neural Network (CoCoQCNN). The first model modifies CocoNet by replacing its fully connected layers with quaternion fully connected layers, the second replaces them with convolutional layers, and the third incorporates quaternion convolutional layers. The proposed quaternion-valued architectures have the advantage of requiring only 25% of the trainable parameters of their real-valued counterparts. During training, these architectures learn to encode the input image within their layers. At test time, given normalized pixel coordinates as input, they output the approximate RGB (or grayscale) values, reconstructing the entire learned image. We evaluate the proposed models on images from the CIFAR-10, Set5, and UCSD retinal OCT datasets, showing performance competitive with the baseline CocoNet architecture.
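The 25% parameter figure follows from how a quaternion fully connected layer shares weights. A minimal NumPy sketch of this accounting (my own illustration, not the paper's code; the function name `quaternion_linear` and the layer sizes `n`, `m` are hypothetical): a quaternion layer mapping n input quaternions to m output quaternions reuses four real (m, n) weight matrices across the Hamilton product, whereas the equivalent real-valued layer maps 4n real inputs to 4m real outputs with one dense (4m, 4n) matrix.

```python
import numpy as np

def quaternion_linear(x, W):
    """Apply a quaternion fully connected layer via the Hamilton product.

    x: tuple of four arrays (r, i, j, k), each of shape (n,)
    W: tuple of four arrays (W_r, W_i, W_j, W_k), each of shape (m, n)
    Returns the four output components, each of shape (m,).
    """
    xr, xi, xj, xk = x
    Wr, Wi, Wj, Wk = W
    # Each output component mixes all four input components with the
    # same four weight matrices, following the Hamilton product rules.
    out_r = Wr @ xr - Wi @ xi - Wj @ xj - Wk @ xk
    out_i = Wr @ xi + Wi @ xr + Wj @ xk - Wk @ xj
    out_j = Wr @ xj - Wi @ xk + Wj @ xr + Wk @ xi
    out_k = Wr @ xk + Wi @ xj - Wj @ xi + Wk @ xr
    return out_r, out_i, out_j, out_k

n, m = 8, 16                      # hypothetical layer sizes
rng = np.random.default_rng(0)
W = tuple(rng.standard_normal((m, n)) for _ in range(4))
x = tuple(rng.standard_normal(n) for _ in range(4))
y = quaternion_linear(x, W)

quat_params = 4 * m * n           # four shared (m, n) matrices
real_params = (4 * m) * (4 * n)   # one dense (4m, 4n) real matrix
print(quat_params / real_params)  # 0.25
```

The sharing is what the 25% claim rests on: both layers consume 4n real numbers and emit 4m, but the quaternion layer spends 4mn real parameters where the dense real layer spends 16mn.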