News from participating projects

Here I list interesting papers/tools/research ideas that I found recently. As I also post these references to my participating projects, the right sidebar shows RSS feeds to those projects' sites for these posts.

Submission deadlines (categories: Vis & Graphics, Computer Vision / Image Processing, HPC):

SIGGRAPH '15: Jan. 19, 2015
IEEE Vis '15: Mar. 21/Mar. 31, 2015
ACM MM '15: Mar. 31, 2015
SC '15: Apr. 3/Apr. 10 > Apr. 17, 2015
ACM MM '15 short paper: Apr. 30, 2015
PG '15: May 8/May 15, 2015
SIGGRAPH Asia '15: Jun. 2, 2015
LDAV '15: Jun. 24, 2015
PacificVis '15: Sep. 26, 2014
IPDPS '15: Oct. 10/Oct. 17, 2014
I3D '15: Oct. 21, 2014
CVPR '15: Nov. 14, 2014
EuroVis '15: Nov. 30/Dec. 05, 2014

posted Jul 8, 2015, 6:22 AM by Teng-Yok Lee
I haven't followed SIGGRAPH papers for a while. I am surprised to see that there are still sessions about video processing, time-lapse videos, and light fields, which are topics I have always wanted to study more. It should be a good time to follow up; I will check the papers in these sessions first. REF: http://kesen.realtimerendering.com/sig2015.html

posted Jun 28, 2015, 10:47 AM by Teng-Yok Lee [updated Jul 8, 2015, 6:18 AM]
Visualizing Diffusion Tensor MR Images Using Streamtubes and Streamsurfaces. Song Zhang, C. Demiralp, D.H. Laidlaw. IEEE Transactions on Visualization and Computer Graphics, 9(4): pp. 454-462, Oct.-Dec. 2003. pdf
Exploring Connectivity of the Brain's White Matter with Dynamic Queries. Anthony Sherbondy, David Akers, Rachel Mackenzie, Robert Dougherty, and Brian Wandell
DTI Fiber Clustering in the Whole Brain. Song Zhang and David Laidlaw. In IEEE Vis 2004 (poster session)
Fast and Reproducible Fiber Bundle Selection in DTI Visualization. J. Blaas, C.P. Botha, B. Peters, F.M. Vos, F.H. Post. In IEEE Vis 2005. pdf
Evaluation of Fiber Clustering Methods for Diffusion Tensor Imaging. B. Moberts, A. Vilanova, J.J. van Wijk. In IEEE Vis 2005. pdf
Grid-based Spectral Fiber Clustering. Jan Klein, Philip Bittihn, Peter Ledochowitsch, Horst K. Hahn, Olaf Konrad, Jan Rexilius, Heinz-Otto Peitgen. Proc. SPIE 6509, Medical Imaging 2007: Visualization and Image-Guided Procedures, 65091E, 2007.
A Comparison of the Perceptual Benefits of Linear Perspective and Physically-Based Illumination for Display of Dense 3D Streamtubes. Chris Weigle and David C. Banks. In IEEE Vis 2008.
A Novel Interface for Interactive Exploration of DTI Fibers. Chen et al. IEEE Transactions on Visualization and Computer Graphics, 15(6): pp. 1433-1440, 2009.
Global Illumination of White Matter Fibers from DT-MRI Data. David C. Banks and Carl-Fredrik Westin. pdf
Feature Extraction for DW-MRI Visualization: The State of the Art and Beyond. Thomas Schultz
TrackVis: trackvis.org

posted May 6, 2015, 10:05 AM by Teng-Yok Lee [updated May 6, 2015, 10:08 AM]
I am applying spectral clustering for graph partitioning, but ARPACK can fail to converge when computing the eigenvectors of the 20 smallest eigenvalues. I then found the following tutorials/papers about the convergence issue:

On the Convergence of Spectral Clustering on Random Samples: the Normalized Case. von Luxburg, U., Bousquet, O., and Belkin, M. In Proceedings of the 17th Annual Conference on Learning Theory (COLT), pp. 457-471, 2004. pdf
Limits of Spectral Clustering. von Luxburg, U., Bousquet, O., and Belkin, M. Advances in Neural Information Processing Systems (NIPS) 17, pp. 857-864, 2005. pdf
A Tutorial on Spectral Clustering. Ulrike von Luxburg, 2007. arXiv:0711.0189

One important thing I learned from the tutorial on spectral clustering: we have to make sure that the eigenvalues of L corresponding to the eigenvectors used in unnormalized spectral clustering are significantly smaller than the minimal degree in the graph. More details are in the tutorial. It also introduces 2 ways to normalize.
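To make this concrete, here is a minimal sketch of computing the smallest Laplacian eigenpairs with SciPy's ARPACK wrapper in shift-invert mode, which in my experience converges far more reliably than asking for which='SM' directly. The ring-graph example, the function name, and the tiny negative shift sigma are my own illustration, not from the papers above:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def smallest_laplacian_eigs(W, k):
    """Eigenpairs of the unnormalized Laplacian L = D - W for the k smallest eigenvalues."""
    d = np.asarray(W.sum(axis=1)).ravel()   # node degrees (diagonal of D)
    L = sp.diags(d) - W
    # Shift-invert around a small negative sigma asks ARPACK for the
    # eigenvalues nearest sigma, i.e. the smallest ones, since L is
    # positive semi-definite. The small shift keeps L - sigma*I nonsingular.
    vals, vecs = spla.eigsh(L.tocsc(), k=k, sigma=-1e-5, which='LM')
    order = np.argsort(vals)
    return vals[order], vecs[:, order], d.min()

# Toy example: a ring graph with 40 nodes (every degree is 2).
n = 40
rows = np.arange(n)
W = sp.coo_matrix((np.ones(n), (rows, (rows + 1) % n)), shape=(n, n))
W = (W + W.T).tocsr()
vals, vecs, min_degree = smallest_laplacian_eigs(W, k=5)
# Sanity check from von Luxburg's tutorial: the eigenvalues actually used
# should be well below the minimal degree of the graph.
print(vals.round(4), "min degree:", min_degree)
```

For the ring graph the used eigenvalues are indeed much smaller than the minimal degree, so the tutorial's condition for unnormalized spectral clustering is satisfied here.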
posted May 2, 2015, 4:10 PM by Teng-Yok Lee [updated May 13, 2015, 8:41 PM]
Visualization in Medicine: Theory, Algorithms, and Applications. Bernhard Preim, Dirk Bartz. (Google)

The Visualization Handbook. Charles D. Hansen, Chris R. Johnson. Academic Press, 2005. (Google)

Introduction to Scientific Visualization. Helen Wright. Springer Science & Business Media, 2007. (Google)

Real-Time Volume Graphics. K. Engel, M. Hadwiger, J. Kniss, C. Rezk-Salama, and D. Weiskopf. A.K. Peters, 2006. (Google)
Note: This book is based on the authors' tutorials for SIGGRAPH.

Isosurfaces: Geometry, Topology, and Algorithms. Rephael Wenger. CRC Press, Jun 24, 2013. (Google)
Note: The author's publication page provides a sample chapter.

Flow Visualization, 2nd Edition. Wolfgang Merzkirch. Academic Press, 1987. (Google, Elsevier)

Scientific Visualization: Techniques and Applications. K.W. Brodlie, L.A. Carpenter, R.A. Earnshaw, J.R. Gallop, R.J. Hubbold, A.M. Mumford, C.D. Osland, P. Quarendon. Springer Science & Business Media, 1992. (Google, Springer)
Note: This book is based on a set of documents written for the AGOCG Workshop in the UK, 1991.

An Introductory Guide to Scientific Visualization. Rae Earnshaw, Norman Wiseman. Springer Science & Business Media, 1992. (Google, Springer)

posted Sep 30, 2014, 5:05 PM by Teng-Yok Lee [updated Jun 9, 2015, 4:49 PM]
Real neural networks
http://nxxcxx.github.io/NeuralNetwork/
VisNEST: Visualization of Simulated Neural Brain Activity. http://www.jara.org/en/research/jarahpc/research/details/csgimmersivevis/visnestvisualizationofsimulatedneuralbrainactivity/

Artificial neural networks
Opening the Black Box: Data Driven Visualization of Neural Networks. F.-Y. Tzeng and Kwan-Liu Ma. Proceedings of IEEE Visualization 2005, pp. 383-390, 23-28 Oct. 2005. pdf
Visualization of Artificial Neural Network with WebGL. Markus Sprunck. http://www.swengineeringcandies.com/blog1/experimentalvisualizationofartificialneuralnetworkwithwebgl
Visualizing and Understanding Convolutional Networks. Matthew D. Zeiler and Rob Fergus. In ECCV 2014. pdf
Understanding Deep Image Representations by Inverting Them. Aravindh Mahendran and Andrea Vedaldi. In CVPR 2015. pdf

Misc
Inviso: visualization of Hadoop performance by Netflix. https://github.com/Netflix/inviso
Visualizing MNIST: An Exploration of Dimensionality Reduction. https://colah.github.io/posts/201410VisualizingMNIST/

posted Sep 30, 2014, 8:29 AM by Teng-Yok Lee [updated Sep 30, 2014, 8:29 AM]
These papers are the suggested to-read list of the Coursera course Discrete Inference and Learning in Artificial Vision (instructors: Nikos Paragios and Pawan Kumar). https://www.coursera.org/course/artificialvision

Chaohui Wang, Nikos Komodakis, Nikos Paragios. Markov Random Field Modeling, Inference & Learning in Computer Vision & Image Understanding: A Survey. Computer Vision and Image Understanding, 117(11): 1610-1627 (2013)
Yuri Boykov and Vladimir Kolmogorov. An Experimental Comparison of Min-Cut/Max-Flow Algorithms for Energy Minimization in Vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(9): 1124-1137 (2004)
Yuri Boykov and Olga Veksler. Graph Cuts in Vision and Graphics: Theories and Applications. Handbook of Mathematical Models in Computer Vision, edited by Nikos Paragios, Yunmei Chen and Olivier Faugeras. Springer, 2006
Yuri Boykov, Olga Veksler and Ramin Zabih. Fast Approximate Energy Minimization via Graph Cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(11): 1222-1239 (2001)
Vladimir Kolmogorov. Convergent Tree-reweighted Message Passing for Energy Minimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(10): 1568-1583 (2006)
Vladimir Kolmogorov and Ramin Zabih. What Energy Functions Can Be Minimized via Graph Cuts? IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(2): 147-159 (2004)
Nikos Komodakis, Georgios Tziritas, Nikos Paragios. Performance vs Computational Efficiency for Optimizing Single and Dynamic MRFs: Setting the State of the Art with Primal-Dual Strategies. Computer Vision and Image Understanding, 112(1): 14-29 (2008)
Nikos Komodakis, Nikos Paragios, Georgios Tziritas. MRF Energy Minimization and Beyond via Dual Decomposition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(3): 531-552 (2011)
Ben Taskar, Carlos Guestrin and Daphne Koller. Max-Margin Markov Networks. Proceedings of Advances in Neural Information Processing Systems (2003)
Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann and Yasmine Altun. Large Margin Methods for Structured and Interdependent Output Variables. Journal of Machine Learning Research, 6: 1453-1484 (2005)
Tomas Werner. Revisiting the Linear Programming Relaxation Approach to Gibbs Energy Minimization and Weighted Constraint Satisfaction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(8): 1474-1488 (2010)

posted Sep 29, 2014, 9:42 PM by Teng-Yok Lee [updated Oct 3, 2014, 7:05 PM]
Explorable Images for Visualizing Volume Data. Anna Tikhonova, Carlos D. Correa, Kwan-Liu Ma. In Proceedings of IEEE Pacific Visualization Symposium 2010, pp. 177-184, 2010.
An Exploratory Technique for Coherent Visualization of Time-varying Volume Data. Anna Tikhonova, Carlos D. Correa, Kwan-Liu Ma. Computer Graphics Forum, 29(3): 783-792, 2010. pdf
Visualization by Proxy: A Novel Framework for Deferred Interaction with Volume Data. Anna Tikhonova, Carlos D. Correa, Kwan-Liu Ma. IEEE Transactions on Visualization and Computer Graphics, 16(6): 1551-1559, 2010.
An Image-based Approach to Extreme Scale In Situ Visualization and Analysis. James Ahrens, John Patchett, Sebastien Jourdain, David H. Rogers, Patrick O'Leary, and Mark Petersen. SuperComputing 2014, to appear.

P.S. Another set of relevant papers evaluates the visibility from a single view point:
Visibility-Driven Transfer Functions. Carlos D. Correa, Kwan-Liu Ma. In Proceedings of IEEE Pacific Visualization Symposium 2009, 2009.
Visibility Histograms and Visibility-Driven Transfer Functions. Carlos D. Correa, Kwan-Liu Ma. IEEE Transactions on Visualization and Computer Graphics, 17(2): 192-204, 2011.

posted Sep 27, 2014, 10:12 AM by Teng-Yok Lee
An article to refresh my memory on statistics: Motulsky HJ, Common Misconceptions about Data Analysis and Statistics. J Pharmacol Exp Ther. 351(1): 200-5, 2014 Oct. doi: 10.1124/jpet.114.219170. http://jpet.aspetjournals.org/content/351/1/200.full.pdf
posted Sep 7, 2014, 11:07 PM by Teng-Yok Lee [updated Sep 24, 2014, 5:25 PM]
REF: http://ieeevis.org/year/2014/info/overviewamptopics/acceptedpapers

SciVis
Characterizing Molecular Interactions in Chemical Systems. David Guenther, Roberto Alvarez Boto, Julia Contreras Garcia, Jean-Philip Piquemal, Julien Tierny
Vortex Cores of Inertial Particles. Tobias Günther, Holger Theisel
Advection-Based Sparse Data Management for Visualizing Unsteady Flow. Hanqi Guo, Jiang Zhang, Richen Liu, Lu Liu, Xiaoru Yuan, Jian Huang, Xiangfei Meng, Jingshan Pan
FLDA: Latent Dirichlet Allocation Based Unsteady Flow Analysis. Fan Hong, Chufan Lai, Hanqi Guo, Xiaoru Yuan, Enya Shen, Sikun Li
Fixed-Rate Compressed Floating-Point Arrays. Peter Lindstrom
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering. Ronell Sicat, Jens Krueger, Torsten Möller, Markus Hadwiger. preprint

TVCG
Interpolation-Based Pathline Tracing in Particle-Based Flow Visualization. Jennifer Chandler, Harald Obermaier, Ken Joy

posted Jun 9, 2014, 10:32 PM by Teng-Yok Lee [updated Jun 10, 2014, 8:59 AM]
REF: http://www.holehouse.org/mlclass/06_Logistic_Regression.html

Because it took me a while to finally derive it, I decided to put the details here. Since I cannot typeset the equations nicely, I simplified the notation.

Note 1: Gradient of the logistic cost function

Here the cost function is denoted as F(t), where t stands for theta, m is the number of samples, and (x^{(i)}, y^{(i)}), i = 1 ... m, is the training set. h_{t}(x) = 1/(1 + exp(-t^{T}x)) is the logistic function with parameter theta (t).

F(t) = -1/m sum_{i = 1 ... m} [ y^{(i)} log h_{t}(x^{(i)}) + (1 - y^{(i)}) log(1 - h_{t}(x^{(i)})) ]

The partial derivative w.r.t. t_{j} is denoted as d/d t_{j} = d_{j}. Then the partial derivative d_{j} F(t) is derived as follows:

d_{j} F(t)
= -1/m sum_{i = 1 ... m} [ y^{(i)} / h_{t}(x^{(i)}) d_{j} h_{t}(x^{(i)}) + (1 - y^{(i)}) / (1 - h_{t}(x^{(i)})) d_{j} (1 - h_{t}(x^{(i)})) ]
= -1/m sum_{i = 1 ... m} [ y^{(i)} / h_{t}(x^{(i)}) d_{j} h_{t}(x^{(i)}) - (1 - y^{(i)}) / (1 - h_{t}(x^{(i)})) d_{j} h_{t}(x^{(i)}) ]   <- The derivative of the constant 1 vanishes, flipping the sign of the second term.
= -1/m sum_{i = 1 ... m} d_{j} h_{t}(x^{(i)}) [ y^{(i)} / h_{t}(x^{(i)}) - (1 - y^{(i)}) / (1 - h_{t}(x^{(i)})) ]   <- Factor d_{j} h_{t}(x^{(i)}) out of both terms.
= -1/m sum_{i = 1 ... m} d_{j} h_{t}(x^{(i)}) [ y^{(i)} (1 - h_{t}(x^{(i)})) - h_{t}(x^{(i)}) (1 - y^{(i)}) ] / [ h_{t}(x^{(i)}) (1 - h_{t}(x^{(i)})) ]
= -1/m sum_{i = 1 ... m} d_{j} h_{t}(x^{(i)}) ( y^{(i)} - h_{t}(x^{(i)}) ) / [ h_{t}(x^{(i)}) (1 - h_{t}(x^{(i)})) ]

where

d_{j} h_{t}(x^{(i)})
= d_{j} (1 + exp(-t^{T}x^{(i)}))^{-1}
= (1 + exp(-t^{T}x^{(i)}))^{-2} exp(-t^{T}x^{(i)}) x^{(i)}_{j}
= [ 1/(1 + exp(-t^{T}x^{(i)})) ] [ exp(-t^{T}x^{(i)}) / (1 + exp(-t^{T}x^{(i)})) ] x^{(i)}_{j}
= h_{t}(x^{(i)}) (1 - h_{t}(x^{(i)})) x^{(i)}_{j}

Thus

d_{j} F(t)
= -1/m sum_{i = 1 ... m} ( y^{(i)} - h_{t}(x^{(i)}) ) [ h_{t}(x^{(i)}) (1 - h_{t}(x^{(i)})) x^{(i)}_{j} ] / [ h_{t}(x^{(i)}) (1 - h_{t}(x^{(i)})) ]
= 1/m sum_{i = 1 ... m} ( h_{t}(x^{(i)}) - y^{(i)} ) x^{(i)}_{j}

Note 2: How is logistic regression related to MLE?

The logistic regression cost can be treated as the negative log-likelihood of a Bernoulli random variable. Here P[x; t] = h_{t}(x) is the probability that x belongs to class 1 (i.e., that y = 1). Then the probability of y given x is

f(y | x) = P[x; t]^{y} (1 - P[x; t])^{1 - y}

and its log-likelihood in theta t is:

L(t) = log f(y | x) = y log P[x; t] + (1 - y) log(1 - P[x; t]) = y log h_{t}(x) + (1 - y) log(1 - h_{t}(x))

That's why minimizing the cost function F is equivalent to finding the Maximum Likelihood Estimator.
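The closed-form gradient above is easy to verify numerically. Below is a minimal sketch that compares it against central finite differences; the toy data, the function name `logistic_cost_and_grad`, and all parameter choices are hypothetical, purely to check the derivation:

```python
import numpy as np

def logistic_cost_and_grad(t, X, y):
    """Logistic cost F(t) and its gradient, matching the derivation above."""
    h = 1.0 / (1.0 + np.exp(-X @ t))   # h_t(x^(i)) for all samples at once
    m = len(y)
    F = -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))
    grad = (X.T @ (h - y)) / m         # 1/m sum_i (h_t(x^(i)) - y^(i)) x^(i)_j
    return F, grad

# Hypothetical toy data.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = (rng.random(50) < 0.5).astype(float)
t = rng.normal(size=3)

F, grad = logistic_cost_and_grad(t, X, y)

# Central finite differences of F along each coordinate of t.
eps = 1e-6
num_grad = np.array([
    (logistic_cost_and_grad(t + eps * e, X, y)[0] -
     logistic_cost_and_grad(t - eps * e, X, y)[0]) / (2 * eps)
    for e in np.eye(3)
])
print(np.max(np.abs(grad - num_grad)))  # tiny if the closed form is right
```

If the derivation holds, the analytic and numeric gradients agree to within finite-difference error.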

