Abstract
The immediacy of digital images and the ease with which their content can be understood have created great opportunities for the development of social networks. However, serious problems have also emerged: some visual content is maliciously tampered with for illegal purposes, while other modifications are benign, made just for fun, to enhance artistic value, or to improve the effectiveness of news dissemination. Beyond tampering detection, therefore, evaluating the influence of image tampering has become an urgent task. In this paper, with the help of forensic tools, we study the problem of automatically assessing the influence of image tampering by examining whether a modification affects the dominant visual content, and we utilize a saliency mechanism to assess how harmful the tampering is. Experimental results demonstrate the effectiveness of our method.
1 Introduction
Nowadays, images play a vital role on online social networks (OSNs), since an image says more than a thousand words and is not restricted by the barrier of language. However, easy access to image-editing software makes it extremely simple to alter the content of images or to create new ones, so that material may be manipulated to serve personal ambition or propaganda, e.g., by reusing past images in unrelated contexts or by explicitly tampering with media content. As a result, seeing is no longer believing [1].
Image forensics, which focuses on physical content, is an effective technique for detecting whether images have been modified. But many image operations are benign, performed for fun or for artistic value, such as the visible watermarking in Fig. 1(a) and the photo retouching and repair in Fig. 1(b). Most images on social networks have been processed to enhance their posting effect or to resize them, e.g., by image enhancement, median filtering, or JPEG compression; all of these count as doctored images in the field of image forensics, even though the essential content of the images is unchanged. This is different from the malicious tampering found in rumor images. Therefore, how to differentiate innocent operations from malicious tampering is our main focus; that is to say, we want to evaluate the influence of image tampering.
So what features are best for evaluating the importance of tampered content? The first question is whether the visual content is changed, i.e., whether the core and main content are changed. Saliency features are usually regarded as the important content of an image [2, 3]. So in this paper, building on image forensics techniques, we use a saliency mechanism to evaluate the influence of image tampering. Meanwhile, we define several evaluation indexes and corresponding thresholds. If the index of the altered content in the saliency map exceeds a preset threshold, the operation is regarded as malicious; otherwise, it is regarded as innocent or uncertain.
In the rest of the paper, after reviewing related work on information credibility and image forensics and describing our motivation, we present the proposed method for evaluating the influence of tampered image content, and finally we report experimental results and discussion.
2 Related Work
Some microblogs and websites provide functions for identifying false information, such as [4,5,6], but most are human-based. For today's OSN big data, however, human-based methods are not effective. Recently there have been many efforts to study information credibility in microblogs, ranging from automated algorithms to models that predict the credibility of users [7] and messages [8,9,10]. However, with the exception of [11], little research has focused on image credibility, and there is as yet no work on content-based image credibility. Image credibility verification on OSNs is a fairly challenging problem that builds on image forensic technology. In the field of image forensics, there are many excellent algorithms for detecting image operations such as seam carving, copy-move [12,13,14], near-duplicates [15], median filtering [16], JPEG compression [17], and image composition [18]. However, most of these methods do not evaluate how important the tampered parts are.
In Fig. 2, (a) and (c) are downloaded from the Internet [19]. Image (c), which has been retweeted excessively [20], is a maliciously doctored version of (a); (b) is a benign retouched version of (a); and (d)–(f) are the saliency maps of (a)–(c) produced by the CA attention algorithm [3], respectively. We can see that innocent retouching alters the image saliency only slightly, while malicious tampering alters it enormously. Therefore, we evaluate the influence of image tampering via how much an operation affects the salient content of an image.
3 Proposed Method
As outlined above, we use a saliency mechanism to evaluate the influence of image tampering. First, we give the assumptions of our model; then we present three indexes to evaluate the importance of modified content; and finally we propose the evaluation pipeline.
3.1 Assumptions of the Model
The importance evaluation of image tampering is based on the following assumptions:

(1) The bigger the area of the modified content, the stronger the influence of the modification;

(2) The more salient the pixels of the modified content, the stronger the influence of the modification.
3.2 Evaluation Indexes
Based on the assumptions in Sect. 3.1, three indexes are proposed in this part. Suppose we have a modified image of size \(M \times N\); p(i, j) is the probability of the pixel value at position (i, j), and \(v_s(i, j)\) is the saliency value of the pixel at position (i, j).
(1) Ratio of modified area \(R_a\):

$$R_a = \frac{C'}{C}$$

where \(C'\) is the modified area and C is the original image area.
(2) Ratio of saliency value per unit area \(E_s\):

$$E_s = \left(\frac{v_s'}{v_s}\right) / R_a$$

where

$$v_s = \sum _{i,j}^{M,N}v_s(i,j), \quad v_s' = \sum _{i,j}^{M,N}v_s'(i,j)$$

and \(v_s'(i,j)\) is the saliency value of the modified region at position (i, j).
(3) Ratio of saliency information amount per unit area \(E_{cs}\):

$$E_{cs} = \left(\frac{I_{cs}'}{I_{cs}}\right) / R_a$$

where

$$I_{cs} = \sum _{i,j}^{M,N}I_c(i,j)\,v_s(i,j) = -\sum _{i,j}^{M,N}\log _2 p(i,j)\,v_s(i,j)$$

$$I_{cs}' = \sum _{i,j}^{M,N}I_c'(i,j)\,v_s'(i,j) = -\sum _{i,j}^{M,N}\log _2 p'(i,j)\,v_s'(i,j)$$

and \(p'(i,j)\) is the probability of the pixel value of the modified region at position (i, j).
The tampering influence of an image is defined as a numeric value, and thresholds are used to determine whether it is benign or malicious. For index \(R_a\), the bigger \(R_a\), the stronger the influence of the modification. For indexes \(E_s\) and \(E_{cs}\), we set confidence values according to a 95% confidence level. When the index lies in the interval [0, 0.95), the image is benign; when in [0.95, 1.05], uncertain; and when in (1.05, \(\beta \)], malicious, where \(\beta > 1.05\).
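As a concrete illustration, the three indexes can be computed from a grayscale image, its saliency map, and a binary mask of the detected modified region. The following NumPy sketch is ours (the function and variable names are not from the paper), and it estimates the pixel-value probabilities p(i, j) from the image histogram, which is one plausible reading of the definition:

```python
import numpy as np

def tampering_indexes(image, saliency, mask):
    """Sketch of the Sect. 3.2 indexes (names are ours, not the paper's).

    image:    (M, N) grayscale image, dtype uint8
    saliency: (M, N) non-negative saliency map
    mask:     (M, N) boolean map of the detected modified region
    """
    M, N = image.shape

    # (1) Ratio of modified area: R_a = C' / C
    R_a = mask.sum() / (M * N)

    # (2) Ratio of saliency value per unit area: E_s = (v_s' / v_s) / R_a
    v_s = saliency.sum()            # total saliency of the whole image
    v_s_mod = saliency[mask].sum()  # saliency inside the modified region
    E_s = (v_s_mod / v_s) / R_a

    # (3) Ratio of saliency information amount per unit area:
    # I_c(i,j) = -log2 p(i,j), with p(i,j) estimated as the empirical
    # frequency of the pixel value at (i,j) in the image histogram.
    hist = np.bincount(image.ravel(), minlength=256) / (M * N)
    p = hist[image]                     # per-pixel probability (always > 0)
    info = -np.log2(p) * saliency       # I_c(i,j) * v_s(i,j)
    E_cs = (info[mask].sum() / info.sum()) / R_a

    return R_a, E_s, E_cs
```

For a uniform saliency map and a region whose pixel-value statistics match the rest of the image, both ratios reduce to 1, i.e., the "uncertain" band around which the thresholds above are centered.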
3.3 Evaluation Pipeline
Given an image, the evaluation pipeline is as follows:

(1) Detect the modified regions with the corresponding forensics algorithm;

(2) Generate saliency maps via a series of attention models;

(3) Compute the indexes presented in Sect. 3.2;

(4) Classify and analyze the influence of the image tampering.
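Step (4) can be sketched as a simple interval lookup using the thresholds given in Sect. 3.2; the function name and the default cap value for \(\beta \) below are our own illustrative choices:

```python
def classify_influence(index_value, beta=2.0):
    """Map an evaluation index (E_s or E_cs) to a verdict, using the
    intervals from Sect. 3.2: [0, 0.95) benign, [0.95, 1.05] uncertain,
    (1.05, beta] malicious.  beta (> 1.05) is a hypothetical upper bound."""
    if not 0 <= index_value <= beta:
        raise ValueError("index value outside [0, beta]")
    if index_value < 0.95:
        return "benign"
    if index_value <= 1.05:
        return "uncertain"
    return "malicious"
```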
4 Experiments
In this part, we use the forgery detection dataset of [14] to evaluate the performance of our method, and we utilize five classical attention models (Itti [2], CA [3], GBVS [21], SIM [22], SUN [23]) to generate saliency maps. First, we ask six forensic experts to assess the tampered images in the dataset and average their scores. Then we use the coincidence degree between the human-based assessments and the evaluation indexes to test the effectiveness of the proposed method.
4.1 Impact of Different Modified Area
In the experimental dataset, \(R_a\) lies in the interval [1.3%, 13.71%]. Sorting \(R_a\) in ascending order, the human-based assessment scores are shown in Fig. 3. According to the assumption in Sect. 3.1, the bigger the area of the modified content, the stronger the influence of the tampering. However, as Fig. 3 shows, the line chart exhibits no explicit tendency or direction. That is to say, in this dataset, there is no direct relationship between the influence of the different tampered images and \(R_a\).
4.2 Coincidence Degree
The coincidence degree between human-based assessments and evaluation indexes using different attention models is shown in Table 1.
As shown in Table 1, both \(E_s\) and \(E_{cs}\) generally have a good coincidence degree with the human-based tampering assessments under all five classical attention models; the index \(E_{cs}\) has a higher coincidence degree than \(E_s\), and the CA attention model outperforms the other four. We then apply the five attention algorithms to Fig. 2(a), (b), and (c) and obtain the saliency maps shown in Fig. 4. The CA saliency map of the retouched image is the most similar to that of the original image, while still differentiating the maliciously tampered one.
4.3 Different-Level Modification Within One Image
To further demonstrate the effectiveness of our proposed method, we alter an image with different modifications as shown in Fig. 5, which contains nine images with different modifications. The top-left image (\(E_s\) and \(E_{cs} > 1\)) is taken from the dataset [14]; the other eight images are modified with different numbers of ships via the copy-move operation.
The CA algorithm is then utilized to detect saliency features, and the corresponding line charts of the indexes \(R_a\), \(\frac{v_s'}{v_s}\), and \(\frac{I_{cs}'}{I_{cs}}\) are shown in Fig. 6.
As Fig. 6 shows, when \(E_s\) and \(E_{cs} > 1\), the bigger \(R_a\), \(\frac{v_s'}{v_s}\), and \(\frac{I_{cs}'}{I_{cs}}\), the higher the level of modification and the greater the influence. This suggests that we can assign each image an influence score via the indexes, such that greater influence corresponds to a higher score. In the line chart of \(\frac{I_{cs}'}{I_{cs}}\), downward inflection points appear; the corresponding images shown in Fig. 6 are not easy to differentiate by eye at first glimpse. That is to say, it is uncertain whether the modification is innocent or malicious. Of course, this is a specific case and does not cover all situations.
4.4 Classic Doctored Image Testing
In this part, we select two famous doctored images whose semantic content has been heavily altered and obtain their saliency maps with the CA algorithm, as shown in Fig. 7. The tampered parts (the dashed ellipses) are conspicuous in the saliency maps. That is to say, the salient content of both forged images has been heavily tampered with, which testifies that the saliency mechanism can evaluate the influence of image tampering.
5 Conclusion and Future Work
Information containing maliciously tampered images on OSNs encourages the propagation of rumors. Giving early alerts of fake news can prevent the further spreading of malicious content on social media. Although image tampering analysis cannot solve all problems of rumor dissemination, it can address information containing doctored images as early as possible.
In this work, we attempt to evaluate the importance of tampered content via a saliency mechanism and achieve promising results. The performance of the method is affected by the human-based assessment, which involves prior knowledge, image content description, subjective factors, and so forth. Whether the attention models can capture the "salient content" also affects the experimental results. In future work, we will study image tampering influence in real situations, generating saliency maps based on context, inspired by state-of-the-art captioning algorithms and text analysis methods that connect image and text.
References
Farid, H.: Digital doctoring: how to tell the real from the fake. Significance 3(4), 162–166 (2006)
Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 20(11), 1254–1259 (1998)
Goferman, S., Zelnik-Manor, L., Tal, A.: Context-aware saliency detection. IEEE Trans. Pattern Anal. Mach. Intell. 34(10), 1915–1926 (2012)
http://py.qianlong.com. Accessed 9 May 2017
http://service.account.weibo.com. Accessed 9 May 2017
www.snopes.com. Accessed 9 May 2017
Kumar, S., Morstatter, F., Zafarani, R., Liu, H.: Whom should I follow?: identifying relevant users during crises. In: Proceedings of the 24th ACM Conference on Hypertext and Social Media, pp. 139–147. ACM (2013)
Alrubaian, M., Alqurishi, M., Hassan, M., Alamri, A.: A credibility analysis system for assessing information on twitter. IEEE Trans. Dependable Secure Comput. (2016). https://doi.org/10.1109/TDSC.2016.2602338
Jin, Z., Cao, J., Zhang, Y., Luo, J.: News verification by exploiting conflicting social view-points in microblogs. In: Thirtieth AAAI Conference on Artificial Intelligence, pp. 2972–2978. AAAI (2016)
Sikdar, S., Kang, B., Odonovan, J., Höllerer, T., Adalı, S.: Understanding information credibility on Twitter. In: International Conference on Social Computing, pp. 19–24. IEEE (2013)
Gupta, A., Lamba, H., Kumaraguru, P., Joshi, A.: Faking Sandy: characterizing and identifying fake images on Twitter during Hurricane Sandy. In: International Conference on World Wide Web, pp. 729–736 (2013)
Cozzolino, D., Poggi, G., Verdoliva, L.: Efficient dense-field copy-move forgery detection. IEEE Trans. Inf. Forensics Secur. 10(11), 2284–2297 (2015)
Ardizzone, E., Bruno, A., Mazzola, G.: Copy-Move forgery detection by matching triangles of keypoints. IEEE Trans. Inf. Forensics Secur. 10(10), 2084–2094 (2015)
Cozzolino, D., Poggi, G., Verdoliva, L.: Copy-move forgery detection based on PatchMatch. In: International Conference on Image Processing, pp. 5312–5316. IEEE (2014)
Oikawa, M.A., Dias, Z., Rocha, A.R., Goldenstein, S.: Manifold learning and spectral clustering for image phylogeny forests. IEEE Trans. Inf. Forensics Secur. 11(1), 5–18 (2016)
Fan, W., Wang, K., Cayre, F., Xiong, Z.: Median filtered image quality enhancement and anti-forensics via variational deconvolution. IEEE Trans. Inf. Forensics Secur. 10(5), 1076–1091 (2015)
Wang, W., Dong, J., Tan, T.N.: Exploring DCT coefficient quantization effects for local tampering detection. IEEE Trans. Inf. Forensics Secur. 9(10), 1653–1666 (2014)
Peng, B., Wang, W., Dong, J., Tan, T.N.: Optimized 3D lighting environment estimation for image forgery detection. IEEE Trans. Inf. Forensics Secur. 12(2), 479–494 (2017)
http://dailyhive.com/toronto/places-in-toronto-to-watch-u-s-election-2016. Accessed 9 May 2017
https://twitter.com/CNNPolitics/status/726144622083358720. Accessed 9 May 2017
Harel, J., Koch, C., Perona, P.: Graph-based visual saliency. In: Advances in Neural Information Processing Systems (NIPS), pp. 545–552 (2006)
Murray, N., Vanrell, M., Otazu, X., Parraga, C.A.: Saliency estimation using a non-parametric low-level vision model. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 433–440 (2011)
Zhang, L., Tong, M., Marks, T., Shan, H., Cottrell, G.: SUN: a Bayesian framework for saliency using natural statistics. J. Vis. 8(7), 1–20 (2008)
Acknowledgement
This work is supported by China Postdoctoral Science Foundation funded project (No. 2016M601168), Science and technology research project of Heilongjiang Education Department (No. 12521092), NSFC (No. U1536120, U1636201, 61502496) and the National Key Research, Development Program of China (No. 2016YFB1001003).
© 2017 Springer Nature Singapore Pte Ltd.
Cite this paper
Ye, K., Sun, X., Xu, J., Dong, J., Tan, T. (2017). Influence Evaluation for Image Tampering Using Saliency Mechanism. In: Yang, J., et al. Computer Vision. CCCV 2017. Communications in Computer and Information Science, vol 771. Springer, Singapore. https://doi.org/10.1007/978-981-10-7299-4_43
Publisher Name: Springer, Singapore
Print ISBN: 978-981-10-7298-7
Online ISBN: 978-981-10-7299-4