Shih-Fu Chang
Person information
- affiliation: Columbia University, New York City, USA
2010 – today
- 2019
- [j100] Yulei Niu, Zhiwu Lu, Ji-Rong Wen, Tao Xiang, Shih-Fu Chang: Multi-Modal Multi-Scale Deep Learning for Large-Scale Image Annotation. IEEE Trans. Image Processing 28(4): 1720-1731 (2019)
- [i50] Zheng Shou, Zhicheng Yan, Yannis Kalantidis, Laura Sevilla-Lara, Marcus Rohrbach, Xudong Lin, Shih-Fu Chang: DMC-Net: Generating Discriminative Motion Cues for Fast Compressed Video Action Recognition. CoRR abs/1901.03460 (2019)
- 2018
- [j99] Chitta Baral, Shih-Fu Chang, Brian Curless, Partha Dasgupta, Julia Hirschberg, Anita Jones: Ask not what your postdoc can do for you ... Commun. ACM 61(1): 42-44 (2018)
- [j98] Lamberto Ballan, Shih-Fu Chang, Gang Hua, Thomas Mensink, Greg Mori, Rahul Sukthankar: Guest Editorial. Computer Vision and Image Understanding 173: 1 (2018)
- [j97] Yu-Gang Jiang, Zuxuan Wu, Jun Wang, Xiangyang Xue, Shih-Fu Chang: Exploiting Feature and Class Relationships in Video Categorization with Regularized Deep Neural Networks. IEEE Trans. Pattern Anal. Mach. Intell. 40(2): 352-364 (2018)
- [j96] Yinxiao Li, Yan Wang, Yonghao Yue, Danfei Xu, Michael Case, Shih-Fu Chang, Eitan Grinspun, Peter K. Allen: Model-Driven Feedforward Prediction for Manipulation of Deformable Objects. IEEE Trans. Automation Science and Engineering 15(4): 1621-1638 (2018)
- [j95] B. Prabhakaran, Yu-Gang Jiang, Hari Kalva, Shih-Fu Chang: Editorial IEEE Transactions on Multimedia Special Section on Video Analytics: Challenges, Algorithms, and Applications. IEEE Trans. Multimedia 20(5): 1037 (2018)
- [j94] Yu-Gang Jiang, Zuxuan Wu, Jinhui Tang, Zechao Li, Xiangyang Xue, Shih-Fu Chang: Modeling Multimodal Clues in a Hybrid Deep Learning Framework for Video Classification. IEEE Trans. Multimedia 20(11): 3137-3147 (2018)
- [c299] Long Chen, Hanwang Zhang, Jun Xiao, Wei Liu, Shih-Fu Chang: Zero-Shot Visual Recognition Using Semantics-Preserving Adversarial Embedding Networks. CVPR 2018: 1043-1052
- [c298] Hanwang Zhang, Yulei Niu, Shih-Fu Chang: Grounding Referring Expressions in Images by Variational Context. CVPR 2018: 4158-4166
- [c297] Zheng Shou, Hang Gao, Lei Zhang, Kazuyuki Miyazawa, Shih-Fu Chang: AutoLoc: Weakly-Supervised Temporal Action Localization in Untrimmed Videos. ECCV (16) 2018: 162-179
- [c296] Zheng Shou, Junting Pan, Jonathan Chan, Kazuyuki Miyazawa, Hassan Mansour, Anthony Vetro, Xavier Giró i Nieto, Shih-Fu Chang: Online Detection of Action Start in Untrimmed, Streaming Videos. ECCV (3) 2018: 551-568
- [c295] Spencer Whitehead, Heng Ji, Mohit Bansal, Shih-Fu Chang, Clare R. Voss: Incorporating Background Knowledge into Video Description Generation. EMNLP 2018: 3992-4001
- [c294] Di Lu, Spencer Whitehead, Lifu Huang, Heng Ji, Shih-Fu Chang: Entity-aware Image Caption Generation. EMNLP 2018: 4013-4023
- [c293] Hongzhi Li, Joseph G. Ellis, Lei Zhang, Shih-Fu Chang: PatternNet: Visual Pattern Mining with Deep Neural Network. ICMR 2018: 291-299
- [c292] Xavier Alameda-Pineda, Miriam Redi, Nicu Sebe, Shih-Fu Chang, Jiebo Luo: EE-USAD: ACM MM 2018 Workshop on Understanding Subjective Attributes of Data focus on Evoked Emotions. ACM Multimedia 2018: 2127-2128
- [c291] Hang Gao, Zheng Shou, Alireza Zareian, Hanwang Zhang, Shih-Fu Chang: Low-shot Learning via Covariance-Preserving Adversarial Augmentation Networks. NeurIPS 2018: 983-993
- [i49] Zheng Shou, Junting Pan, Jonathan Chan, Kazuyuki Miyazawa, Hassan Mansour, Anthony Vetro, Xavier Giró i Nieto, Shih-Fu Chang: Online Action Detection in Untrimmed, Streaming Videos - Modeling and Evaluation. CoRR abs/1802.06822 (2018)
- [i48] Di Lu, Spencer Whitehead, Lifu Huang, Heng Ji, Shih-Fu Chang: Entity-aware Image Caption Generation. CoRR abs/1804.07889 (2018)
- [i47] Zheng Shou, Hang Gao, Lei Zhang, Kazuyuki Miyazawa, Shih-Fu Chang: AutoLoc: Weakly-supervised Temporal Action Localization. CoRR abs/1807.08333 (2018)
- [i46] Philipp Blandfort, Desmond Patton, William R. Frey, Svebor Karaman, Surabhi Bhargava, Fei-Tzin Lee, Siddharth Varia, Chris Kedzie, Michael B. Gaskell, Rossano Schifanella, Kathleen R. McKeown, Shih-Fu Chang: Multimodal Social Media Analysis for Gang Violence Prevention. CoRR abs/1807.08465 (2018)
- [i45] Xu Zhang, Felix X. Yu, Svebor Karaman, Wei Zhang, Shih-Fu Chang: Heated-Up Softmax Embedding. CoRR abs/1809.04157 (2018)
- [i44] Hang Gao, Zheng Shou, Alireza Zareian, Hanwang Zhang, Shih-Fu Chang: Low-shot Learning via Covariance-Preserving Adversarial Augmentation Networks. CoRR abs/1810.11730 (2018)
- [i43] Yuan Liu, Lin Ma, Yifeng Zhang, Wei Liu, Shih-Fu Chang: Multi-granularity Generator for Temporal Action Proposal. CoRR abs/1811.11524 (2018)
- [i42] Hassan Akbari, Svebor Karaman, Surabhi Bhargava, Brian Chen, Carl Vondrick, Shih-Fu Chang: Multi-level Multimodal Common Semantic Space for Image-Phrase Grounding. CoRR abs/1811.11683 (2018)
- [i41] Long Chen, Hanwang Zhang, Jun Xiao, Xiangnan He, Shiliang Pu, Shih-Fu Chang: Scene Dynamics: Counterfactual Critic Multi-Agent Training for Scene Graph Generation. CoRR abs/1812.02347 (2018)
- 2017
- [j93] Nikolaos Pappas, Miriam Redi, Mercan Topkara, Hongyi Liu, Brendan Jou, Tao Chen, Shih-Fu Chang: Multilingual visual sentiment concept clustering and analysis. IJMIR 6(1): 51-70 (2017)
- [j92] Mohammad Soleymani, Björn W. Schuller, Shih-Fu Chang: Guest editorial: Multimodal sentiment analysis and mining in the wild. Image Vision Comput. 65: 1-2 (2017)
- [j91] Mohammad Soleymani, David García, Brendan Jou, Björn W. Schuller, Shih-Fu Chang, Maja Pantic: A survey of multimodal sentiment analysis. Image Vision Comput. 65: 3-14 (2017)
- [j90] Felix X. Yu, Aditya Bhaskara, Sanjiv Kumar, Yunchao Gong, Shih-Fu Chang: On Binary Embedding using Circulant Matrices. Journal of Machine Learning Research 18: 150:1-150:30 (2017)
- [j89] Xianglong Liu, Junfeng He, Shih-Fu Chang: Hash Bit Selection for Nearest Neighbor Search. IEEE Trans. Image Processing 26(11): 5367-5380 (2017)
- [c290] Pascal Mettes, Cees Snoek, Shih-Fu Chang: Localizing Actions from Video Labels and Pseudo-Annotations. BMVC 2017
- [c289] Zheng Shou, Jonathan Chan, Alireza Zareian, Kazuyuki Miyazawa, Shih-Fu Chang: CDC: Convolutional-De-Convolutional Networks for Precise Temporal Action Localization in Untrimmed Videos. CVPR 2017: 1417-1426
- [c288] Hanwang Zhang, Zawlin Kyaw, Shih-Fu Chang, Tat-Seng Chua: Visual Translation Embedding Network for Visual Relation Detection. CVPR 2017: 3107-3115
- [c287] Xu Zhang, Felix X. Yu, Svebor Karaman, Shih-Fu Chang: Learning Discriminative and Transformation Covariant Local Feature Detectors. CVPR 2017: 4923-4931
- [c286] Hanwang Zhang, Zawlin Kyaw, Jinyang Yu, Shih-Fu Chang: PPR-FCN: Weakly Supervised Visual Relation Detection via Parallel Pairwise R-FCN. ICCV 2017: 4243-4251
- [c285] Xu Zhang, Felix X. Yu, Sanjiv Kumar, Shih-Fu Chang: Learning Spread-Out Local Feature Descriptors. ICCV 2017: 4605-4613
- [c284] Delia Fernandez, Alejandro Woodward, Victor Campos, Xavier Giró i Nieto, Brendan Jou, Shih-Fu Chang: More Cat than Cute?: Interpretable Prediction of Adjective-Noun Pairs. MUSA2@MM 2017: 61-69
- [c283] Tongtao Zhang, Spencer Whitehead, Hanwang Zhang, Hongzhi Li, Joseph G. Ellis, Lifu Huang, Wei Liu, Heng Ji, Shih-Fu Chang: Improving Event Extraction via Multimodal Integration. ACM Multimedia 2017: 270-278
- [c282] Xavier Alameda-Pineda, Miriam Redi, Mohammad Soleymani, Nicu Sebe, Shih-Fu Chang, Samuel D. Gosling: MUSA2: First ACM Workshop on Multimodal Understanding of Social, Affective and Subjective Attributes. ACM Multimedia 2017: 1974-1975
- [c281] Zuxuan Wu, Yu-Gang Jiang, Larry Davis, Shih-Fu Chang: LSVC2017: Large-Scale Video Classification Challenge. ACM Multimedia 2017: 1978-1979
- [c280]
- [e8] Xavier Alameda-Pineda, Miriam Redi, Mohammad Soleymani, Nicu Sebe, Shih-Fu Chang, Samuel D. Gosling: Proceedings of the Workshop on Multimodal Understanding of Social, Affective and Subjective Attributes, MUSA2@MM 2017, Mountain View, CA, USA, October 27, 2017. ACM 2017, ISBN 978-1-4503-5509-4 [contents]
- [i40] Hanwang Zhang, Zawlin Kyaw, Shih-Fu Chang, Tat-Seng Chua: Visual Translation Embedding Network for Visual Relation Detection. CoRR abs/1702.08319 (2017)
- [i39] Zheng Shou, Jonathan Chan, Alireza Zareian, Kazuyuki Miyazawa, Shih-Fu Chang: CDC: Convolutional-De-Convolutional Networks for Precise Temporal Action Localization in Untrimmed Videos. CoRR abs/1703.01515 (2017)
- [i38] Hongzhi Li, Joseph G. Ellis, Lei Zhang, Shih-Fu Chang: PatternNet: Visual Pattern Mining with Deep Neural Network. CoRR abs/1703.06339 (2017)
- [i37] Yu-Gang Jiang, Zuxuan Wu, Jinhui Tang, Zechao Li, Xiangyang Xue, Shih-Fu Chang: Modeling Multimodal Clues in a Hybrid Deep Learning Framework for Video Classification. CoRR abs/1706.04508 (2017)
- [i36] Pascal Mettes, Cees G. M. Snoek, Shih-Fu Chang: Localizing Actions from Video Labels and Pseudo-Annotations. CoRR abs/1707.09143 (2017)
- [i35] Hanwang Zhang, Zawlin Kyaw, Jinyang Yu, Shih-Fu Chang: PPR-FCN: Weakly Supervised Visual Relation Detection via Parallel Pairwise R-FCN. CoRR abs/1708.01956 (2017)
- [i34] Du Tran, Jamie Ray, Zheng Shou, Shih-Fu Chang, Manohar Paluri: ConvNet Architecture Search for Spatiotemporal Feature Learning. CoRR abs/1708.05038 (2017)
- [i33] Delia Fernandez, Alejandro Woodward, Victor Campos, Xavier Giró i Nieto, Brendan Jou, Shih-Fu Chang: More cat than cute? Interpretable Prediction of Adjective-Noun Pairs. CoRR abs/1708.06039 (2017)
- [i32] Xu Zhang, Felix X. Yu, Sanjiv Kumar, Shih-Fu Chang: Learning Spread-out Local Feature Descriptors. CoRR abs/1708.06320 (2017)
- [i31] Victor Campos, Brendan Jou, Xavier Giró i Nieto, Jordi Torres, Shih-Fu Chang: Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks. CoRR abs/1708.06834 (2017)
- [i30] Yulei Niu, Zhiwu Lu, Ji-Rong Wen, Tao Xiang, Shih-Fu Chang: Multi-Modal Multi-Scale Deep Learning for Large-Scale Image Annotation. CoRR abs/1709.01220 (2017)
- [i29] Hanwang Zhang, Yulei Niu, Shih-Fu Chang: Grounding Referring Expressions in Images by Variational Context. CoRR abs/1712.01892 (2017)
- [i28] Long Chen, Hanwang Zhang, Jun Xiao, Wei Liu, Shih-Fu Chang: Zero-Shot Visual Recognition using Semantics-Preserving Adversarial Embedding Network. CoRR abs/1712.01928 (2017)
- 2016
- [j88] Jun Wang, Wei Liu, Sanjiv Kumar, Shih-Fu Chang: Learning to Hash for Indexing Big Data - A Survey. Proceedings of the IEEE 104(1): 34-57 (2016)
- [c279] Di Lu, Xiaoman Pan, Nima Pourdamghani, Shih-Fu Chang, Heng Ji, Kevin Knight: A Multi-media Approach to Cross-lingual Entity Knowledge Transfer. ACL (1) 2016
- [c278] Jie Feng, Brian L. Price, Scott Cohen, Shih-Fu Chang: Interactive Segmentation on RGBD Images via Cue Selection. CVPR 2016: 156-164
- [c277] Zheng Shou, Dongang Wang, Shih-Fu Chang: Temporal Action Localization in Untrimmed Videos via Multi-stage CNNs. CVPR 2016: 1049-1058
- [c276] Yan Wang, Sunghyun Cho, Jue Wang, Shih-Fu Chang: PanoSwarm: Collaborative and Synchronized Multi-Device Panoramic Photography. IUI 2016: 261-270
- [c275]
- [c274] Nikolaos Pappas, Miriam Redi, Mercan Topkara, Brendan Jou, Hongyi Liu, Tao Chen, Shih-Fu Chang: Multilingual Visual Sentiment Concept Matching. ICMR 2016: 151-158
- [c273] Brendan Jou, Margaret Yuying Qian, Shih-Fu Chang: SentiCart: Cartography and Geo-contextualization for Multilingual Visual Sentiment. ICMR 2016: 389-392
- [c272] Emily Song, Joseph G. Ellis, Hongzhi Li, Shih-Fu Chang: Watching What and How Politicians Discuss Various Topics: A Large-Scale Video Analytics UI. ICMR 2016: 401-404
- [c271] Hongyi Liu, Brendan Jou, Tao Chen, Mercan Topkara, Nikolaos Pappas, Miriam Redi, Shih-Fu Chang: Complura: Exploring and Leveraging a Large-scale Multilingual Visual Sentiment Ontology. ICMR 2016: 417-420
- [c270] Joseph G. Ellis, Svebor Karaman, Hongzhi Li, Hong Bin Shim, Shih-Fu Chang: Placing Broadcast News Videos in their Social Media Context Using Hashtags. ACM Multimedia 2016: 684-688
- [c269] Hongzhi Li, Joseph G. Ellis, Heng Ji, Shih-Fu Chang: Event Specific Multimodal Pattern Mining for Knowledge Base Construction. ACM Multimedia 2016: 821-830
- [c268] Brendan Jou, Shih-Fu Chang: Deep Cross Residual Learning for Multitask Visual Recognition. ACM Multimedia 2016: 998-1007
- [c267] Bingchen Gong, Brendan Jou, Felix X. Yu, Shih-Fu Chang: Tamp: A Library for Compact Deep Neural Networks with Structured Matrices. ACM Multimedia 2016: 1206-1209
- [c266] Di Lu, Clare R. Voss, Fangbo Tao, Xiang Ren, Rachel Guan, Rostyslav Korolov, Tongtao Zhang, Dongang Wang, Hongzhi Li, Taylor Cassidy, Heng Ji, Shih-Fu Chang, Jiawei Han, William A. Wallace, James A. Hendler, Mei Si, Lance M. Kaplan: Cross-media Event Extraction and Recommendation. HLT-NAACL Demos 2016: 72-76
- [c265]
- [i27] Hongzhi Li, Joseph G. Ellis, Shih-Fu Chang: Event Specific Multimodal Pattern Mining with Image-Caption Pairs. CoRR abs/1601.00022 (2016)
- [i26] Zheng Shou, Dongang Wang, Shih-Fu Chang: Action Temporal Localization in Untrimmed Videos via Multi-stage CNNs. CoRR abs/1601.02129 (2016)
- [i25] Brendan Jou, Shih-Fu Chang: Deep Cross Residual Learning for Multitask Visual Recognition. CoRR abs/1604.01335 (2016)
- [i24] Ran Tao, Arnold W. M. Smeulders, Shih-Fu Chang: Generic Instance Search and Re-identification from One Example via Attributes and Categories. CoRR abs/1605.07104 (2016)
- [i23] Dongang Wang, Zheng Shou, Hongyi Liu, Shih-Fu Chang: EventNet Version 1.1 Technical Report. CoRR abs/1605.07289 (2016)
- [i22] Brendan Jou, Shih-Fu Chang: Going Deeper for Multilingual Visual Sentiment Detection. CoRR abs/1605.09211 (2016)
- [i21] Nikolaos Pappas, Miriam Redi, Mercan Topkara, Brendan Jou, Hongyi Liu, Tao Chen, Shih-Fu Chang: Multilingual Visual Sentiment Concept Matching. CoRR abs/1606.02276 (2016)
- [i20]
- [i19] Yinxiao Li, Yan Wang, Yonghao Yue, Danfei Xu, Michael Case, Shih-Fu Chang, Eitan Grinspun, Peter K. Allen: Model-Driven Feed-Forward Prediction for Manipulation of Deformable Objects. CoRR abs/1607.04411 (2016)
- [i18] Vaidehi Dalmia, Hongyi Liu, Shih-Fu Chang: Columbia MVSO Image Sentiment Dataset. CoRR abs/1611.04455 (2016)
- 2015
- [j87] Shih-Fu Chang, Thomas S. Huang, Michael S. Lew, Bart Thomee: Special issue on concept detection with big data. IJMIR 4(2): 73-74 (2015)
- [j86] Jae-Pil Heo, Youngwoon Lee, Junfeng He, Shih-Fu Chang, Sung-Eui Yoon: Spherical Hashing: Binary Code Embedding with Hyperspheres. IEEE Trans. Pattern Anal. Mach. Intell. 37(11): 2304-2316 (2015)
- [j85] Yan-Ying Chen, Tao Chen, Taikun Liu, Hong-Yuan Mark Liao, Shih-Fu Chang: Assistive Image Comment Robot - A Novel Mid-Level Concept-Based Representation. IEEE Trans. Affective Computing 6(3): 298-311 (2015)
- [j84] Kuan-Ting Lai, Dong Liu, Shih-Fu Chang, Ming-Syan Chen: Learning Sample Specific Weights for Late Fusion. IEEE Trans. Image Processing 24(9): 2772-2783 (2015)
- [j83] Yu-Gang Jiang, Qi Dai, Tao Mei, Yong Rui, Shih-Fu Chang: Super Fast Event Recognition in Internet Videos. IEEE Trans. Multimedia 17(8): 1174-1186 (2015)
- [j82] Christoph Kofler, Subhabrata Bhattacharya, Martha Larson, Tao Chen, Alan Hanjalic, Shih-Fu Chang: Uploader Intent for Online Video: Typology, Inference, and Applications. IEEE Trans. Multimedia 17(8): 1200-1212 (2015)
- [c264] Wei Liu, Cun Mu, Rongrong Ji, Shiqian Ma, John R. Smith, Shih-Fu Chang: Low-Rank Similarity Metric Learning in High Dimensions. AAAI 2015: 2792-2799
- [c263] Ran Tao, Arnold W. M. Smeulders, Shih-Fu Chang: Attributes and categories for generic instance search from one example. CVPR 2015: 177-186
- [c262] Xiao-Ming Wu, Zhenguo Li, Shih-Fu Chang: New insights into Laplacian similarity search. CVPR 2015: 1949-1957
- [c261] Tongtao Zhang, Hongzhi Li, Heng Ji, Shih-Fu Chang: Cross-document Event Coreference Resolution based on Cross-media Features. EMNLP 2015: 201-206
- [c260] Yu Cheng, Felix X. Yu, Rogério Schmidt Feris, Sanjiv Kumar, Alok N. Choudhary, Shih-Fu Chang: An Exploration of Parameter Redundancy in Deep Networks with Circulant Projections. ICCV 2015: 2857-2865
- [c259] Xu Zhang, Felix X. Yu, Ruiqi Guo, Sanjiv Kumar, Shengjin Wang, Shih-Fu Chang: Fast Orthogonal Projection Based on Kronecker Product. ICCV 2015: 2929-2937
- [c258] Yinxiao Li, Danfei Xu, Yonghao Yue, Yan Wang, Shih-Fu Chang, Eitan Grinspun, Peter K. Allen: Regrasping and unfolding of garments using predictive thin shell modeling. ICRA 2015: 1382-1388
- [c257] Masoud Mazloom, AmirHossein Habibian, Dong Liu, Cees G. M. Snoek, Shih-Fu Chang: Encoding Concept Prototypes for Video Event Detection and Summarization. ICMR 2015: 123-130
- [c256] Shih-Fu Chang: Words and Pictures: Crowdsource Discovery beyond Image Semantics. CrowdMM@ACM Multimedia 2015: 1-2
- [c255] Shih-Fu Chang, Matthew Cooper, Denver Dash, Funda Kivran-Swaine, Jia Li, David A. Shamma: Opportunities and Challenges of Industry-Academic Collaborations in Multimedia Research. ACM Multimedia 2015: 45
- [c254] Brendan Jou, Tao Chen, Nikolaos Pappas, Miriam Redi, Mercan Topkara, Shih-Fu Chang: Visual Affect Around the World: A Large-scale Multilingual Visual Sentiment Ontology. ACM Multimedia 2015: 159-168
- [c253] Guangnan Ye, Yitong Li, Hongliang Xu, Dong Liu, Shih-Fu Chang: EventNet: A Large Scale Structured Concept Library for Complex Event Detection in Video. ACM Multimedia 2015: 471-480
- [c252] Hongliang Xu, Guangnan Ye, Yitong Li, Dong Liu, Shih-Fu Chang: Large Video Event Ontology Browsing, Search and Tagging (EventNet Demo). ACM Multimedia 2015: 803-804
- [c251] Francesco Gelli, Tiberio Uricchio, Marco Bertini, Alberto Del Bimbo, Shih-Fu Chang: Image Popularity Prediction in Social Media Using Sentiment and Context Features. ACM Multimedia 2015: 907-910
- [c250] Mohamed R. Amer, Ajay Divakaran, Shih-Fu Chang, Nicu Sebe: 2nd Workshop on Computational Models of Social Interactions: Human-Computer-Media Communication (HCMC2015). ACM Multimedia 2015: 1343-1344
- [c249] Mohammad Soleymani, Yi-Hsuan Yang, Yu-Gang Jiang, Shih-Fu Chang: ASM'15: The 1st International Workshop on Affect and Sentiment in Multimedia. ACM Multimedia 2015: 1349
- [c248] Pedro A. Szekely, Craig A. Knoblock, Jason Slepicka, Andrew Philpot, Amandeep Singh, Chengye Yin, Dipsy Kapoor, Prem Natarajan, Daniel Marcu, Kevin Knight, David Stallard, Subessware S. Karunamoorthy, Rajagopal Bojanapalli, Steven Minton, Brian Amanatullah, Todd Hughes, Mike Tamayo, David Flynt, Rachel Artiss, Shih-Fu Chang, Tao Chen, Gerald Hiebel, Lidia Ferreira: Building and Using a Knowledge Graph to Combat Human Trafficking. International Semantic Web Conference (2) 2015: 205-221
- [c247] Pedro A. Szekely, Craig A. Knoblock, Jason Slepicka, Chengye Yin, Andrew Philpot, Amandeep Singh, Dipsy Kapoor, Prem Natarajan, Daniel Marcu, Kevin Knight, David Stallard, Subessware S. Karunamoorthy, Rajagopal Bojanapalli, Steven Minton, Brian Amanatullah, Todd Hughes, Mike Tamayo, David Flynt, Rachel Artiss, Shih-Fu Chang, Tao Chen, Gerald Hiebel, Lidia Ferreira: Using a Knowledge Graph to Combat Human Trafficking. International Semantic Web Conference (Posters & Demos) 2015
- [e7] Mohammad Soleymani, Yi-Hsuan Yang, Yu-Gang Jiang, Shih-Fu Chang: Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia, ASM 2015, Brisbane, Australia, October 30, 2015. ACM 2015, ISBN 978-1-4503-3750-2 [contents]
- [e6] Mohamed R. Amer, Ajay Divakaran, Shih-Fu Chang, Nicu Sebe: Proceedings of the 2nd Workshop on Computational Models of Social Interactions: Human-Computer-Media Communication, HCMC 2015, Brisbane, Australia, October 30, 2015. ACM 2015, ISBN 978-1-4503-3747-2 [contents]
- [i17] Yu Cheng, Felix X. Yu, Rogério Schmidt Feris, Sanjiv Kumar, Alok N. Choudhary, Shih-Fu Chang: Fast Neural Networks with Circulant Projections. CoRR abs/1502.03436 (2015)
- [i16] Yu-Gang Jiang, Zuxuan Wu, Jun Wang, Xiangyang Xue, Shih-Fu Chang: Exploiting Feature and Class Relationships in Video Categorization with Regularized Deep Neural Networks. CoRR abs/1502.07209 (2015)
- [i15] Xu Zhang, Felix X. Yu, Shih-Fu Chang, Shengjin Wang: Deep Transfer Network: Unsupervised Domain Adaptation. CoRR abs/1503.00591 (2015)
- [i14] Felix X. Yu, Sanjiv Kumar, Henry A. Rowley, Shih-Fu Chang: Compact Nonlinear Maps and Circulant Extensions. CoRR abs/1503.03893 (2015)
- [i13] Guangnan Ye, Yitong Li, Hongliang Xu, Dong Liu, Shih-Fu Chang: EventNet: A Large Scale Structured Concept Library for Complex Event Detection in Video. CoRR abs/1506.02328 (2015)
- [i12] Yan Wang, Sunghyun Cho, Jue Wang, Shih-Fu Chang: PanoSwarm: Collaborative and Synchronized Multi-Device Panoramic Photography. CoRR abs/1507.01147 (2015)
- [i11] Yan Wang, Jue Wang, Shih-Fu Chang: CamSwarm: Instantaneous Smartphone Camera Arrays for Collaborative Photography. CoRR abs/1507.01148 (2015)
- [i10] Brendan Jou, Tao Chen, Nikolaos Pappas, Miriam Redi, Mercan Topkara, Shih-Fu Chang: Visual Affect Around the World: A Large-scale Multilingual Visual Sentiment Ontology. CoRR abs/1508.03868 (2015)
- [i9] Jun Wang, Wei Liu, Sanjiv Kumar, Shih-Fu Chang: Learning to Hash for Indexing Big Data - A Survey. CoRR abs/1509.05472 (2015)
- [i8] Felix X. Yu, Aditya Bhaskara, Sanjiv Kumar, Yunchao Gong, Shih-Fu Chang: On Binary Embedding using Circulant Matrices. CoRR abs/1511.06480 (2015)
- 2014
- [j81] I-Hong Jhuo, Guangnan Ye, Shenghua Gao, Dong Liu, Yu-Gang Jiang, D. T. Lee, Shih-Fu Chang: Discovering joint audio-visual codewords for video event detection. Mach. Vis. Appl. 25(1): 33-47 (2014)
- [j80] Xianglong Liu, Yadong Mu, Bo Lang, Shih-Fu Chang: Mixed image-keyword query adaptive hashing over multilabel images. TOMCCAP 10(2): 22:1-22:21 (2014)
- [c246]