Here I share some of the amazing reviews we have received over the years from top conferences and journals, to inspire fellow colleagues and young researchers. Hopefully, the take-home message at the end of each review will guide you on how to publish your papers successfully.

British Machine Vision Conference 2018 (BMVC)
In my opinion it is quite evident that normal maps will improve the overall quality of intrinsic image decomposition. This part of the paper does not feel like a proper contribution. (Borderline/Reject)

Know that the main role of a good scientific publication is to surprise.

IEEE Conference on Computer Vision and Pattern Recognition 2019 (CVPR)
1) It is unsurprising that normals help since half of the work is already done. The method itself is a new CNN architecture with no particularly deep insight (the fact that normals help is unsurprising). (Borderline)
2) It is rather obvious that the surface normal information provides strong information for intrinsic image decomposition. (Weak Reject)

As with BMVC 2018, do not even bother if your motivations or findings are not surprising. It does not matter whether the paper is technically sound. There is no value in quantifying a finding. Do not waste your time; just focus on the element of surprise.

International Conference on Computer Vision 2019 (ICCV)
1) The novelty of this paper is quite limited. The network structure and dataset are not new. (Weak Reject)

Remember to propose a new architecture and a new dataset every time.

2) I only see very few experiments in 4.3. It only compares with [7]. (Weak Reject)

In this case, there is only one method available in the field to compare against, [7]. However, as can be observed, that does not matter. If there is only one method available for comparison, make sure to invent new methods to provide additional experiments.

3) In my opinion, authors at least should show quantitative results on indoor images such as IIW. (Weak Reject)
4) No quantitative comparison is provided for the IIW dataset. (Weak Reject)

In this case, our supervised method was trained on outdoor settings, aiming to learn the relevant intrinsic images. IIW, on the other hand, is a dataset of indoor scenes with a completely different setup and properties. However, that also does not matter. Make sure to provide experimental results that cover all possible settings, regardless of your own setup, motivation, assumptions, method or aim.

IEEE Transactions on Image Processing 2019 (TIP)
1) Lack of novelty. This paper makes two modifications on ShapeNet [10], which is published in CVPR2017. (Reject)

There is no point in modifying an already available model. Do not even think about it if the model is more than two years old. Recall the golden rule of ICCV 2019: remember to propose a new architecture and a new dataset every time.

2) For In-the-wild Real World Outdoor Images, there are only qualitative results. I suggest that the authors sparsely annotate a small set of the real world (outdoor) garden dataset, and compare the proposed model. (Reject)

It is an impossible request to annotate a real-world outdoor dataset in the field of intrinsic image decomposition. Collecting and generating ground-truth real-world intrinsic images is only possible in a fully-controlled laboratory setting (can you guess why?). Unfortunately, I do not provide a take-home message this time; I will leave this one to you, dear reader, to think it over and come up with your novel solution...