<!--{{{-->
<link rel='alternate' type='application/rss+xml' title='RSS' href='index.xml' />
<!--}}}-->
Background: #fff
Foreground: #000
PrimaryPale: #8cf
PrimaryLight: #18f
PrimaryMid: #04b
PrimaryDark: #014
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #eee
TertiaryLight: #ccc
TertiaryMid: #999
TertiaryDark: #666
Error: #f88
/*{{{*/
body {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}

a {color:[[ColorPalette::PrimaryMid]];}
a:hover {background-color:[[ColorPalette::PrimaryMid]]; color:[[ColorPalette::Background]];}
a img {border:0;}

h1,h2,h3,h4,h5,h6 {color:[[ColorPalette::SecondaryDark]]; background:transparent;}
h1 {border-bottom:2px solid [[ColorPalette::TertiaryLight]];}
h2,h3 {border-bottom:1px solid [[ColorPalette::TertiaryLight]];}

.button {color:[[ColorPalette::PrimaryDark]]; border:1px solid [[ColorPalette::Background]];}
.button:hover {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::SecondaryLight]]; border-color:[[ColorPalette::SecondaryMid]];}
.button:active {color:[[ColorPalette::Background]]; background:[[ColorPalette::SecondaryMid]]; border:1px solid [[ColorPalette::SecondaryDark]];}

.header {background:[[ColorPalette::PrimaryMid]];}
.headerShadow {color:[[ColorPalette::Foreground]];}
.headerShadow a {font-weight:normal; color:[[ColorPalette::Foreground]];}
.headerForeground {color:[[ColorPalette::Background]];}
.headerForeground a {font-weight:normal; color:[[ColorPalette::PrimaryPale]];}

.tabSelected {color:[[ColorPalette::PrimaryDark]];
	background:[[ColorPalette::TertiaryPale]];
	border-left:1px solid [[ColorPalette::TertiaryLight]];
	border-top:1px solid [[ColorPalette::TertiaryLight]];
	border-right:1px solid [[ColorPalette::TertiaryLight]];
}
.tabUnselected {color:[[ColorPalette::Background]]; background:[[ColorPalette::TertiaryMid]];}
.tabContents {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::TertiaryPale]]; border:1px solid [[ColorPalette::TertiaryLight]];}
.tabContents .button {border:0;}

#sidebar {}
#sidebarOptions input {border:1px solid [[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel {background:[[ColorPalette::PrimaryPale]];}
#sidebarOptions .sliderPanel a {border:none;color:[[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel a:hover {color:[[ColorPalette::Background]]; background:[[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel a:active {color:[[ColorPalette::PrimaryMid]]; background:[[ColorPalette::Background]];}

.wizard {background:[[ColorPalette::PrimaryPale]]; border:1px solid [[ColorPalette::PrimaryMid]];}
.wizard h1 {color:[[ColorPalette::PrimaryDark]]; border:none;}
.wizard h2 {color:[[ColorPalette::Foreground]]; border:none;}
.wizardStep {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];
	border:1px solid [[ColorPalette::PrimaryMid]];}
.wizardStep.wizardStepDone {background:[[ColorPalette::TertiaryLight]];}
.wizardFooter {background:[[ColorPalette::PrimaryPale]];}
.wizardFooter .status {background:[[ColorPalette::PrimaryDark]]; color:[[ColorPalette::Background]];}
.wizard .button {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::SecondaryLight]]; border: 1px solid;
	border-color:[[ColorPalette::SecondaryPale]] [[ColorPalette::SecondaryDark]] [[ColorPalette::SecondaryDark]] [[ColorPalette::SecondaryPale]];}
.wizard .button:hover {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::Background]];}
.wizard .button:active {color:[[ColorPalette::Background]]; background:[[ColorPalette::Foreground]]; border: 1px solid;
	border-color:[[ColorPalette::PrimaryDark]] [[ColorPalette::PrimaryPale]] [[ColorPalette::PrimaryPale]] [[ColorPalette::PrimaryDark]];}

.wizard .notChanged {background:transparent;}
.wizard .changedLocally {background:#80ff80;}
.wizard .changedServer {background:#8080ff;}
.wizard .changedBoth {background:#ff8080;}
.wizard .notFound {background:#ffff80;}
.wizard .putToServer {background:#ff80ff;}
.wizard .gotFromServer {background:#80ffff;}

#messageArea {border:1px solid [[ColorPalette::SecondaryMid]]; background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]];}
#messageArea .button {color:[[ColorPalette::PrimaryMid]]; background:[[ColorPalette::SecondaryPale]]; border:none;}

.popupTiddler {background:[[ColorPalette::TertiaryPale]]; border:2px solid [[ColorPalette::TertiaryMid]];}

.popup {background:[[ColorPalette::TertiaryPale]]; color:[[ColorPalette::TertiaryDark]]; border-left:1px solid [[ColorPalette::TertiaryMid]]; border-top:1px solid [[ColorPalette::TertiaryMid]]; border-right:2px solid [[ColorPalette::TertiaryDark]]; border-bottom:2px solid [[ColorPalette::TertiaryDark]];}
.popup hr {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::PrimaryDark]]; border-bottom:1px;}
.popup li.disabled {color:[[ColorPalette::TertiaryMid]];}
.popup li a, .popup li a:visited {color:[[ColorPalette::Foreground]]; border: none;}
.popup li a:hover {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; border: none;}
.popup li a:active {background:[[ColorPalette::SecondaryPale]]; color:[[ColorPalette::Foreground]]; border: none;}
.popupHighlight {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}
.listBreak div {border-bottom:1px solid [[ColorPalette::TertiaryDark]];}

.tiddler .defaultCommand {font-weight:bold;}

.shadow .title {color:[[ColorPalette::TertiaryDark]];}

.title {color:[[ColorPalette::SecondaryDark]];}
.subtitle {color:[[ColorPalette::TertiaryDark]];}

.toolbar {color:[[ColorPalette::PrimaryMid]];}
.toolbar a {color:[[ColorPalette::TertiaryLight]];}
.selected .toolbar a {color:[[ColorPalette::TertiaryMid]];}
.selected .toolbar a:hover {color:[[ColorPalette::Foreground]];}

.tagging, .tagged {border:1px solid [[ColorPalette::TertiaryPale]]; background-color:[[ColorPalette::TertiaryPale]];}
.selected .tagging, .selected .tagged {background-color:[[ColorPalette::TertiaryLight]]; border:1px solid [[ColorPalette::TertiaryMid]];}
.tagging .listTitle, .tagged .listTitle {color:[[ColorPalette::PrimaryDark]];}
.tagging .button, .tagged .button {border:none;}

.footer {color:[[ColorPalette::TertiaryLight]];}
.selected .footer {color:[[ColorPalette::TertiaryMid]];}

.error, .errorButton {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::Error]];}
.warning {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::SecondaryPale]];}
.lowlight {background:[[ColorPalette::TertiaryLight]];}

.zoomer {background:none; color:[[ColorPalette::TertiaryMid]]; border:3px solid [[ColorPalette::TertiaryMid]];}

.imageLink, #displayArea .imageLink {background:transparent;}

.annotation {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; border:2px solid [[ColorPalette::SecondaryMid]];}

.viewer .listTitle {list-style-type:none; margin-left:-2em;}
.viewer .button {border:1px solid [[ColorPalette::SecondaryMid]];}
.viewer blockquote {border-left:3px solid [[ColorPalette::TertiaryDark]];}

.viewer table, table.twtable {border:2px solid [[ColorPalette::TertiaryDark]];}
.viewer th, .viewer thead td, .twtable th, .twtable thead td {background:[[ColorPalette::SecondaryMid]]; border:1px solid [[ColorPalette::TertiaryDark]]; color:[[ColorPalette::Background]];}
.viewer td, .viewer tr, .twtable td, .twtable tr {border:1px solid [[ColorPalette::TertiaryDark]];}

.viewer pre {border:1px solid [[ColorPalette::SecondaryLight]]; background:[[ColorPalette::SecondaryPale]];}
.viewer code {color:[[ColorPalette::SecondaryDark]];}
.viewer hr {border:0; border-top:dashed 1px [[ColorPalette::TertiaryDark]]; color:[[ColorPalette::TertiaryDark]];}

.highlight, .marked {background:[[ColorPalette::SecondaryLight]];}

.editor input {border:1px solid [[ColorPalette::PrimaryMid]];}
.editor textarea {border:1px solid [[ColorPalette::PrimaryMid]]; width:100%;}
.editorFooter {color:[[ColorPalette::TertiaryMid]];}
.readOnly {background:[[ColorPalette::TertiaryPale]];}

#backstageArea {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::TertiaryMid]];}
#backstageArea a {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::Background]]; border:none;}
#backstageArea a:hover {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; }
#backstageArea a.backstageSelTab {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}
#backstageButton a {background:none; color:[[ColorPalette::Background]]; border:none;}
#backstageButton a:hover {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::Background]]; border:none;}
#backstagePanel {background:[[ColorPalette::Background]]; border-color: [[ColorPalette::Background]] [[ColorPalette::TertiaryDark]] [[ColorPalette::TertiaryDark]] [[ColorPalette::TertiaryDark]];}
.backstagePanelFooter .button {border:none; color:[[ColorPalette::Background]];}
.backstagePanelFooter .button:hover {color:[[ColorPalette::Foreground]];}
#backstageCloak {background:[[ColorPalette::Foreground]]; opacity:0.6; filter:alpha(opacity=60);}
/*}}}*/
/*{{{*/
* html .tiddler {height:1%;}

body {font-size:.75em; font-family:arial,helvetica; margin:0; padding:0;}

h1,h2,h3,h4,h5,h6 {font-weight:bold; text-decoration:none;}
h1,h2,h3 {padding-bottom:1px; margin-top:1.2em;margin-bottom:0.3em;}
h4,h5,h6 {margin-top:1em;}
h1 {font-size:1.35em;}
h2 {font-size:1.25em;}
h3 {font-size:1.1em;}
h4 {font-size:1em;}
h5 {font-size:.9em;}

hr {height:1px;}

a {text-decoration:none;}

dt {font-weight:bold;}

ol {list-style-type:decimal;}
ol ol {list-style-type:lower-alpha;}
ol ol ol {list-style-type:lower-roman;}
ol ol ol ol {list-style-type:decimal;}
ol ol ol ol ol {list-style-type:lower-alpha;}
ol ol ol ol ol ol {list-style-type:lower-roman;}
ol ol ol ol ol ol ol {list-style-type:decimal;}

.txtOptionInput {width:11em;}

#contentWrapper .chkOptionInput {border:0;}

.externalLink {text-decoration:underline;}

.indent {margin-left:3em;}
.outdent {margin-left:3em; text-indent:-3em;}
code.escaped {white-space:nowrap;}

.tiddlyLinkExisting {font-weight:bold;}
.tiddlyLinkNonExisting {font-style:italic;}

/* the 'a' is required for IE, otherwise it renders the whole tiddler in bold */
a.tiddlyLinkNonExisting.shadow {font-weight:bold;}

#mainMenu .tiddlyLinkExisting,
	#mainMenu .tiddlyLinkNonExisting,
	#sidebarTabs .tiddlyLinkNonExisting {font-weight:normal; font-style:normal;}
#sidebarTabs .tiddlyLinkExisting {font-weight:bold; font-style:normal;}

.header {position:relative;}
.header a:hover {background:transparent;}
.headerShadow {position:relative; padding:4.5em 0 1em 1em; left:-1px; top:-1px;}
.headerForeground {position:absolute; padding:4.5em 0 1em 1em; left:0; top:0;}

.siteTitle {font-size:3em;}
.siteSubtitle {font-size:1.2em;}

#mainMenu {position:absolute; left:0; width:10em; text-align:right; line-height:1.6em; padding:1.5em 0.5em 0.5em 0.5em; font-size:1.1em;}

#sidebar {position:absolute; right:3px; width:16em; font-size:.9em;}
#sidebarOptions {padding-top:0.3em;}
#sidebarOptions a {margin:0 0.2em; padding:0.2em 0.3em; display:block;}
#sidebarOptions input {margin:0.4em 0.5em;}
#sidebarOptions .sliderPanel {margin-left:1em; padding:0.5em; font-size:.85em;}
#sidebarOptions .sliderPanel a {font-weight:bold; display:inline; padding:0;}
#sidebarOptions .sliderPanel input {margin:0 0 0.3em 0;}
#sidebarTabs .tabContents {width:15em; overflow:hidden;}

.wizard {padding:0.1em 1em 0 2em;}
.wizard h1 {font-size:2em; font-weight:bold; background:none; padding:0; margin:0.4em 0 0.2em;}
.wizard h2 {font-size:1.2em; font-weight:bold; background:none; padding:0; margin:0.4em 0 0.2em;}
.wizardStep {padding:1em 1em 1em 1em;}
.wizard .button {margin:0.5em 0 0; font-size:1.2em;}
.wizardFooter {padding:0.8em 0.4em 0.8em 0;}
.wizardFooter .status {padding:0 0.4em; margin-left:1em;}
.wizard .button {padding:0.1em 0.2em;}

#messageArea {position:fixed; top:2em; right:0; margin:0.5em; padding:0.5em; z-index:2000; _position:absolute;}
.messageToolbar {display:block; text-align:right; padding:0.2em;}
#messageArea a {text-decoration:underline;}

.tiddlerPopupButton {padding:0.2em;}
.popupTiddler {position: absolute; z-index:300; padding:1em; margin:0;}

.popup {position:absolute; z-index:300; font-size:.9em; padding:0; list-style:none; margin:0;}
.popup .popupMessage {padding:0.4em;}
.popup hr {display:block; height:1px; width:auto; padding:0; margin:0.2em 0;}
.popup li.disabled {padding:0.4em;}
.popup li a {display:block; padding:0.4em; font-weight:normal; cursor:pointer;}
.listBreak {font-size:1px; line-height:1px;}
.listBreak div {margin:2px 0;}

.tabset {padding:1em 0 0 0.5em;}
.tab {margin:0 0 0 0.25em; padding:2px;}
.tabContents {padding:0.5em;}
.tabContents ul, .tabContents ol {margin:0; padding:0;}
.txtMainTab .tabContents li {list-style:none;}
.tabContents li.listLink { margin-left:.75em;}

#contentWrapper {display:block;}
#splashScreen {display:none;}

#displayArea {margin:1em 17em 0 14em;}

.toolbar {text-align:right; font-size:.9em;}

.tiddler {padding:1em 1em 0;}

.missing .viewer,.missing .title {font-style:italic;}

.title {font-size:1.6em; font-weight:bold;}

.missing .subtitle {display:none;}
.subtitle {font-size:1.1em;}

.tiddler .button {padding:0.2em 0.4em;}

.tagging {margin:0.5em 0.5em 0.5em 0; float:left; display:none;}
.isTag .tagging {display:block;}
.tagged {margin:0.5em; float:right;}
.tagging, .tagged {font-size:0.9em; padding:0.25em;}
.tagging ul, .tagged ul {list-style:none; margin:0.25em; padding:0;}
.tagClear {clear:both;}

.footer {font-size:.9em;}
.footer li {display:inline;}

.annotation {padding:0.5em; margin:0.5em;}

* html .viewer pre {width:99%; padding:0 0 1em 0;}
.viewer {line-height:1.4em; padding-top:0.5em;}
.viewer .button {margin:0 0.25em; padding:0 0.25em;}
.viewer blockquote {line-height:1.5em; padding-left:0.8em;margin-left:2.5em;}
.viewer ul, .viewer ol {margin-left:0.5em; padding-left:1.5em;}

.viewer table, table.twtable {border-collapse:collapse; margin:0.8em 1.0em;}
.viewer th, .viewer td, .viewer tr,.viewer caption,.twtable th, .twtable td, .twtable tr,.twtable caption {padding:3px;}
table.listView {font-size:0.85em; margin:0.8em 1.0em;}
table.listView th, table.listView td, table.listView tr {padding:0 3px 0 3px;}

.viewer pre {padding:0.5em; margin-left:0.5em; font-size:1.2em; line-height:1.4em; overflow:auto;}
.viewer code {font-size:1.2em; line-height:1.4em;}

.editor {font-size:1.1em;}
.editor input, .editor textarea {display:block; width:100%; font:inherit;}
.editorFooter {padding:0.25em 0; font-size:.9em;}
.editorFooter .button {padding-top:0; padding-bottom:0;}

.fieldsetFix {border:0; padding:0; margin:1px 0px;}

.zoomer {font-size:1.1em; position:absolute; overflow:hidden;}
.zoomer div {padding:1em;}

* html #backstage {width:99%;}
* html #backstageArea {width:99%;}
#backstageArea {display:none; position:relative; overflow: hidden; z-index:150; padding:0.3em 0.5em;}
#backstageToolbar {position:relative;}
#backstageArea a {font-weight:bold; margin-left:0.5em; padding:0.3em 0.5em;}
#backstageButton {display:none; position:absolute; z-index:175; top:0; right:0;}
#backstageButton a {padding:0.1em 0.4em; margin:0.1em;}
#backstage {position:relative; width:100%; z-index:50;}
#backstagePanel {display:none; z-index:100; position:absolute; width:90%; margin-left:3em; padding:1em;}
.backstagePanelFooter {padding-top:0.2em; float:right;}
.backstagePanelFooter a {padding:0.2em 0.4em;}
#backstageCloak {display:none; z-index:20; position:absolute; width:100%; height:100px;}

.whenBackstage {display:none;}
.backstageVisible .whenBackstage {display:block;}
/*}}}*/
/***
StyleSheet for use when a translation requires any css style changes.
This StyleSheet can be used directly by languages such as Chinese, Japanese and Korean which need larger font sizes.
***/
/*{{{*/
body {font-size:0.8em;}
#sidebarOptions {font-size:1.05em;}
#sidebarOptions a {font-style:normal;}
#sidebarOptions .sliderPanel {font-size:0.95em;}
.subtitle {font-size:0.8em;}
.viewer table.listView {font-size:0.95em;}
/*}}}*/
/*{{{*/
@media print {
#mainMenu, #sidebar, #messageArea, .toolbar, #backstageButton, #backstageArea {display: none !important;}
#displayArea {margin: 1em 1em 0em;}
noscript {display:none;} /* Fixes a feature in Firefox 1.5.0.2 where print preview displays the noscript content */
}
/*}}}*/
<!--{{{-->
<div class='header' macro='gradient vert [[ColorPalette::PrimaryLight]] [[ColorPalette::PrimaryMid]]'>
<div class='headerShadow'>
<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
</div>
<div class='headerForeground'>
<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
</div>
</div>
<div id='mainMenu' refresh='content' tiddler='MainMenu'></div>
<div id='sidebar'>
<div id='sidebarOptions' refresh='content' tiddler='SideBarOptions'></div>
<div id='sidebarTabs' refresh='content' force='true' tiddler='SideBarTabs'></div>
</div>
<div id='displayArea'>
<div id='messageArea'></div>
<div id='tiddlerDisplay'></div>
</div>
<!--}}}-->
<!--{{{-->
<div class='toolbar' macro='toolbar [[ToolbarCommands::ViewToolbar]]'></div>
<div class='title' macro='view title'></div>
<div class='subtitle'><span macro='view modifier link'></span>, <span macro='view modified date'></span> (<span macro='message views.wikified.createdPrompt'></span> <span macro='view created date'></span>)</div>
<div class='tagging' macro='tagging'></div>
<div class='tagged' macro='tags'></div>
<div class='viewer' macro='view text wikified'></div>
<div class='tagClear'></div>
<!--}}}-->
<!--{{{-->
<div class='toolbar' macro='toolbar [[ToolbarCommands::EditToolbar]]'></div>
<div class='title' macro='view title'></div>
<div class='editor' macro='edit title'></div>
<div macro='annotations'></div>
<div class='editor' macro='edit text'></div>
<div class='editor' macro='edit tags'></div><div class='editorFooter'><span macro='message views.editor.tagPrompt'></span><span macro='tagChooser excludeLists'></span></div>
<!--}}}-->
To get started with this blank [[TiddlyWiki]], you'll need to modify the following tiddlers:
* [[SiteTitle]] & [[SiteSubtitle]]: The title and subtitle of the site, as shown above (after saving, they will also appear in the browser title bar)
* [[MainMenu]]: The menu (usually on the left)
* [[DefaultTiddlers]]: Contains the names of the tiddlers that you want to appear when the TiddlyWiki is opened
You'll also need to enter your username for signing your edits: <<option txtUserName>>
These [[InterfaceOptions]] for customising [[TiddlyWiki]] are saved in your browser

Your username for signing your edits. Write it as a [[WikiWord]] (eg [[JoeBloggs]])

<<option txtUserName>>
<<option chkSaveBackups>> [[SaveBackups]]
<<option chkAutoSave>> [[AutoSave]]
<<option chkRegExpSearch>> [[RegExpSearch]]
<<option chkCaseSensitiveSearch>> [[CaseSensitiveSearch]]
<<option chkAnimate>> [[EnableAnimations]]

----
Also see [[AdvancedOptions]]
<<importTiddlers>>
These are links to the notes from the most recent quarter a course was taught and to the full notes from past quarters (potentially out-of-date):
!!!!Past courses
*[[VSFX 424: Digital Visual Effects II|VSFX 424]] &mdash; from Fall 2010 at SCAD
*[[TECH 311: Digital Materials and Textures|TECH 311]] &mdash; from Spring 2011 at SCAD
*[[TECH 312: Advanced Application Scripting|TECH 312]] &mdash; from Spring 2011 at SCAD
*[[VSFX 350: Procedural Modeling and Animation|VSFX 350]] &mdash; from Spring 2011 at SCAD
*[[VSFX 360: Stereoscopic Imaging|VSFX 360]] &mdash; from Spring 2011 at SCAD
!!!!Past quarters at Savannah College of Art and Design, School of Film, Digital Media and Performing Arts, Visual Effects Department
*[[Fall 2010|2010-Fall-SCAD.html]]
*[[Winter 2011|2011-Winter-SCAD.html]]
*[[Spring 2011|2011-Spring-SCAD.html]]
Hello there,

Brain kibble has moved. Its [[new home is on the blog|http://www.kennethahuff.com/blog/category/brain-kibble/]]&hellip;

I won&rsquo;t be adding anything below, but may update a link if I notice it has changed. Otherwise, new kibble will be served up on the blog.

&mdash; Ken
29 May 2012, Singapore

-----

Food for the brain. Feed the brain. The most recent kibble is near the top. Mostly. //See also// [[Visual resources]]

-----

[[Make good art.|https://vimeo.com/42372767]] &mdash; Yeah. What he said.

[[Nice to see a girl putting handles on cups instead of knocking them off|http://vimeo.com/40143224]] and [[the rabbit is a fur-coated, warm-blooded animal|http://vimeo.com/41420220]]. Dated gems of instructional filmmaking.

[[Particles|http://www.spoon-tamago.com/2012/05/10/tokyo-hotaru-led-lights-sumida-river/]]

He&rsquo;s back! &mdash; //[[An invocation for beginnings|http://ashow.zefrank.com/]]//

OMG! //[[A giant bubble machine!|http://www.triangulationblog.com/2012/03/bubble-device-by-nicholas-hanna.html]]// by [[Nicholas Hanna|http://www.nicholashanna.net/]]

[[Air|http://hint.fm/wind/]] and [[water|http://svs.gsfc.nasa.gov/vis/a000000/a003800/a003827/]]. Earth and fire, anyone?

[[Sleepy?|http://improveverywhere.com/2012/03/19/the-sleeper-car/]]

[[A lovely meditation on the work of an artist creating ceramic vessels and sculptures.|http://www.sueparaskeva.co.uk/video/]]

[[I am smaller than a tiny speck.|http://media.skysurvey.org/interactive360/index.html]]

[[Now I know why I never can find a tape measure when I need one.|http://www.todayandtomorrow.net/2012/01/11/tape-recorders/]]

//Exp&eacute;rimentations enflamm&eacute;es// &mdash; [[painting with fire (light)|http://www.flickr.com/photos/tomlacoste/sets/72157626947212088/]] and more [[here|http://www.flickr.com/photos/tomlacoste/]].

[[Well worth a watch (for creatives and presenters).|http://www.youtube.com/watch?v=78ARBe2JCXw]]

[[This is my kind of rigid body dynamics with fracturing.|http://vimeo.com/15139298]] More [[here|http://oppositionart.com/]].

[[Looking and seeing differently]]

[[Let go my Lego.|http://tiltedtwister.com/]]

&ldquo;[[The marvels of daily life are exciting;|http://onlinebrowsing.blogspot.com/2011/10/robert-doisneau-marvels-of-daily-life.html]] no movie director can arrange the unexpected that you find in the street.&rdquo; &mdash; [[Robert Doisneau|http://en.wikipedia.org/wiki/Robert_Doisneau]]

&ldquo;A good theory or idea is one where you don't assume more than you have to.&rdquo; &mdash; [[John Kostick|http://www.jjkostick.com/John_Kostick/Stars.html]] ([[Here is a lovely video about one of his bodies of work.|http://vimeo.com/32217320]])

[[Simulate this.|http://www.youtube.com/watch?v=hzrwG0dNhDE]]

How to: [[Bunnies|http://www.guardian.co.uk/books/gallery/2011/aug/02/how-to-draw-bunnies-simone-lia#/]] and [[Penguins|http://www.guardian.co.uk/childrens-books-site/gallery/2011/jun/27/how-to-draw-penguins-oliver-jeffers]].

[[Bubbles and ferrofluids! At the same time! And macro!|http://thisiscolossal.com/2011/08/compressed-2-pulsating-magnetic-ferro-fluids-and-soap-bubbles/]]

[[Enjoying being part of the 3.15%.|http://www.peoplemov.in/]]

[[Best rapid-prototyping set-up ever.|http://thisiscolossal.com/2011/06/markus-kayser-builds-a-solar-powered-3d-printer-that-prints-glass-from-sand-and-a-sun-powered-laser-cutter/]]

[[Waxing, waning and wobbling.|http://svs.gsfc.nasa.gov/vis/a000000/a003800/a003810/]]

[[Gelatinous life|http://www.youtube.com/watch?v=3HzFiQFFQYw]]

&ldquo;[[He was pretty good, that guy. It was the first time I didn’t call the police.|http://www.washingtonpost.com/wp-dyn/content/article/2007/04/04/AR2007040401721.html]]&rdquo;

[[How to disappear completely.|http://thefoxisblack.com/2011/06/01/liu-bolin-teaches-us-how-to-disappear-completely/]]

[[Hot bodies have less drag.|http://www.thephotonist.net/2011/05/hot-bodies-have-less-drag/]] (Made you look.)

[[This has all colors of trouble printed all over it.|http://www.popsci.com/gadgets/article/2011-05/2011-invention-awards-magic-wand-printing]]

Nine glorious minutes of [[starlings flocking.|http://vimeo.com/6434925]]

[[Roiling clouds and spinning stars.|http://vimeo.com/23205323]]

If you are going to start an avalanche(!), you should have [[a means of escape.|http://vimeo.com/22669590]] (Hunter, don&rsquo;t try this.)

&ldquo;[[Nice. Wow. This is cool.|http://vimeo.com/15091562]]&rdquo;

I must say that I prefer {{{06902 33797 30026 07243 90700 18295 81471 45296 66417 46047}}} to {{{10097 32533 76520 13586 34673 54876 80959 09117 39292 74945}}}. But that may just be me. (//[[via|http://dataisnature.com/?p=607]]//)

//[[Grimpoteuthis bathynectes|http://www.wired.com/wiredscience/2011/05/winged-octopus-video/]]//

[[A journey into the competitive world of free flight duration aeronautics.|http://floatdocumentary.com/]]

[[Magnetic resonance images of veg and fruits.|http://insideinsides.blogspot.com/]]

Mommy, where do candy bars come from? [[Powered Belt Chicanes and Aligners/Partial Product Rejectors|http://www.youtube.com/watch?v=KpkC4GmBsW8]]. (But what I don&rsquo;t understand is why two different kinds of candy bars are going into the same wrappers. What, you don&rsquo;t have a machine for that problem?)

[[A lovely visualization of water usage in the United States.|http://sansumbrella.com/works/2011/drawing-water/]]

[[I feel like someone is watching me.|http://aquarium.ucsd.edu/blog/2011/04/17/amazing-wolf-eel-egg-photos/]]

So, if [[this thing|http://apod.nasa.gov/apod/ap110502.html]] is around twice the size of Earth, I want to see detailed close-ups.

I love that I live in a world where [[things like this are created.|http://thisiscolossal.com/2011/05/mtv-balloons/]]

Back in outer space&hellip;[[amateur astrophotographers unwittingly help scientists track comet|http://www.wired.com/wiredscience/2011/04/astrophoto-comet/]].

Ze Frank on [[ideas and brain crack|http://www.zefrank.com/theshow/archives/2006/07/071106.html]]. --I miss //The Show//.-- UPDATE: He&rsquo;s back! &mdash; //[[An invocation for beginnings|http://ashow.zefrank.com/]]//

[[APOD|http://apod.nasa.gov/apod/]] has a lovely sequence of photographs of recent [[aurora|http://apod.nasa.gov/apod/ap110325.html]] activity.

I want [[these|http://ccsl.mae.cornell.edu/ornithopter]] and [[these|http://senseable.mit.edu/flyfire/]] to breed.

I have been thinking that I should write a manifesto. [[Found some instructions.|http://www.kimmok.com/514799/THE-MANIFESTO-MANIFESTO]]

[[Relief of gastrointestinal obstruction of a green turtle.|http://www.wired.com/wiredscience/2011/03/sea-turtle-plastic/]] &ldquo;Someday, perhaps, humanity might quit throwing away plastic altogether.&rdquo;

If I had a fortune, [[this is how I would lose it.|http://www.inventables.com/]]

A [[nice example|http://thisiscolossal.com/2011/02/christian-stoll/]] of [[forced perspective|http://en.wikipedia.org/wiki/Forced_perspective]].

&ldquo;Take a deep breath. Even if the air looks clear, it’s nearly certain that you&rsquo;ll inhale tens of millions of solid particles and liquid droplets.&rdquo; ([[link|http://earthobservatory.nasa.gov/Features/Aerosols/page1.php]])

I am fascinated by information visualization. The //[[information aesthetics|http://infosthetics.com/]]// blog is an excellent resource. See also [[this assignment|TECH 312: Data parsing assignment]] from my [[scripting class.|TECH 312]]

[[Macroscopic, sound-manipulated, fluid dynamic sculptures.|http://www.designboom.com/weblog/cat/10/view/11774/dentsu-paint-sound-sculptures.html]]

[[Unbelievable flying objects|http://www.wright-brothers.org/History_Wing/Aviations_Attic/UFOs/UFOs.htm]]. Things were so much more interesting before we really knew how.

[[Best bubble reference ever.|http://www.youtube.com/watch?v=3i-zYdOPG2k]] (Thank you, Sam.)

[[Privacy, please.|http://www.niklasroy.com/project/88/my-little-piece-of-privacy]]

[[Improv Everywhere|http://improveverywhere.com/]] always makes me smile.

[[TEDTalks|http://www.ted.com/]]

[[POP!Tech|http://www.poptech.org/]] &mdash; You cannot beat Vanessa German&rsquo;s [[way of opening a presentation|http://www.poptech.org/popcasts/vanessa_german__poptech_2007]]

//[[Radio Lab|http://www.radiolab.org/]]// &mdash; Must listen with headphones.

[[The news release archive of the Hubble Space Telescope.|http://hubblesite.org/newscenter/archive/releases/]]

[[Images from the High Resolution Imaging Science Experiment|http://hirise.lpl.arizona.edu/katalogos.php]] &mdash; --16,412-- --17,467-- 17,929 ultra-high resolution images of Mars. &mdash; //It is much too easy for me to get lost in these images.// &mdash; Can you find //[[Opportunity|http://marsrover.nasa.gov/home/index.html]]// in [[this recent image|http://hirise.lpl.arizona.edu/releases/images/ESP_021536_1780_no-label.jpg]]? [[Answer.|http://hirise.lpl.arizona.edu/releases/oppy-sm-color.php]]

[[Never miss another eclipse.|http://eclipse.gsfc.nasa.gov/eclipse.html]] Let&rsquo;s just say I missed one and I was not happy about it.

Bill Rankin&rsquo;s //[[Chicago Boundaries|http://www.radicalcartography.net/index.html?chicagodots]]// and Eric Fischer&rsquo;s [[continuation of the idea|http://www.flickr.com/photos/walkingsf/sets/72157624812674967/]].

[[*|http://gnomebomb.tumblr.com/]]
''UPDATE (29 May 2011):'' Below, I added the recipe for the bubble mix that was used today.

----

A student wrote:

//Hello Professor,

I was wondering, what were the exact materials you used to create the giant bubble wands, and what were the exact ingredients, and proportions, did you use to make the bubble solution? Thanks for letting me know!

&mdash; Aspiring Bubble Maker (name withheld to protect the innocent)//

----

Exact? Huh?!? What is this? Science? Hmmm.

!!!!Wands
*From home improvement center
**Two dowel rods
**Eye hooks
**Big bucket
*Safety pins for attaching rope to eye hooks (from my personal collection)
*1/4" cotton welt cord from fabric store
!!!!Solution ingredients
*7th Generation Free and Clear Dishwashing Detergent (probably any would work, but all the references say //not to use// an &ldquo;ultra&rdquo; detergent)
*Glycerine ([[Brighter Day|http://www.brighterdayfoods.com/]] near Forsyth Park in Savannah sells it in the beauty section in big bottles)
*Water (distilled recommended)

The first batch of solution I made was
*2/3 cup soap
*1 gallon water
*3 tablespoons glycerine
I liked it. Made good bubbles. Had fun.

Second batch
*37 oz soap (was a full bottle, hence the odd number)
*10 oz glycerine
*3 gallons of water
The second batch had &ldquo;heavier&rdquo; bubbles. They wanted to head toward the ground. Seemed a bit harder to make huge bubbles, but there definitely were some big ones. Some very cool things happened when the bubble lasted a while and then popped.

The third batch (29 May 2011)
*36 oz soap
*10 oz glycerine
*6 gallons of distilled water
It was a very windy day for the third batch, so it was hard to tell what effect the new ratio had. It also was fairly hot and dry. I think we would have ended up with more large bubbles if it had not been for the gusty wind.

References say that you can use corn syrup instead of glycerine. I will be trying that with the next big batch. I also want to try lowering the glycerine proportion. Or maybe make a big batch and slowly add glycerine, ounce-by-ounce, to see its impact.

Here is [[a page with more recipes|http://bubbleblowers.com/homemade.html]]. You will see that the ingredient proportions are all over the map. Here is [[another page with some recommendations|http://www.wetrock.com/BBM/bbm.html]].

Have fun.

&mdash; K.

P.S. To save you having to look up the conversions:
*1 gallon = 128 oz = 16 cups
*1 cup = 8 oz = 16 tablespoons
*1 tablespoon = 0.5 oz
I could just rewrite all of the recipes with ratios, but a bit of unit conversion never hurt anyone&hellip;
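If you would rather let a computer do that unit conversion, here is a small Python sketch of the arithmetic. The quantities are the ones listed in the three batches above; the ratios it prints are just those numbers divided out, nothing more scientific than that.

```python
# Convert each batch to fluid ounces and compare the ratios.
GALLON_OZ = 128   # 1 gallon = 128 oz (from the conversions above)
CUP_OZ = 8        # 1 cup = 8 oz
TBSP_OZ = 0.5     # 1 tablespoon = 0.5 oz

# The three batches from the recipe notes, everything in ounces.
batches = {
    "first":  {"soap": (2 / 3) * CUP_OZ, "glycerine": 3 * TBSP_OZ, "water": 1 * GALLON_OZ},
    "second": {"soap": 37,               "glycerine": 10,          "water": 3 * GALLON_OZ},
    "third":  {"soap": 36,               "glycerine": 10,          "water": 6 * GALLON_OZ},
}

for name, b in batches.items():
    print(f"{name}: water:soap = {b['water'] / b['soap']:.1f}:1, "
          f"soap:glycerine = {b['soap'] / b['glycerine']:.1f}:1")
```

Running it shows the second batch was roughly twice as soapy (about 10:1 water to soap) as the first and third (about 24:1 and 21:1), which may have something to do with those &ldquo;heavier&rdquo; bubbles.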
Professor Huff presents a startling new discovery at a famous Neolithic site.

[img[Discovery (and it is not a My Little Pony)|inclusions-2010-fall/dmc-discovery-2010-10.jpg]]

Some fun with Python, Houdini and --a not-so-secret ingredient-- [[Photosynth|http://photosynth.net/]].

Portions of the presentation will be stereoscopic. Persons with red-blue allergies are hereby cautioned.

Wednesday, 27 October 2010, 8:00 p.m.
Digital Media Club
Montgomery 211

----

Here are some technical notes and resources from the presentation.

The mystery guest in the image above is Kermit the Frog riding a zebra pi&ntilde;ata (?!?!). [[Here is the original Photosynth page.|http://photosynth.net/view.aspx?cid=c08c0eee-6c1a-47f0-9680-794f0702f251]] Kermit appears in a pointcloud derived from a Photosynth of Stonehenge created by //National Geographic// ([[original Photosynth|http://photosynth.net/view.aspx?cid=e5c7e730-95a3-4a29-a38e-d0d23223844e]]). In the presentation, I demonstrated a workflow for extracting the pointcloud data from the Photosynth and incorporating that data in a Houdini scene.

As a first pass, I used a slightly-modified version of [[this script|http://binarymillenium.com/2008/08/photosynth-export-process-tutorial.html]] to generate Houdini .chan files from binary data files extracted from the Photosynth pages. Those .chan files were brought into a ~CHOPs network in Houdini and used to generate points in a SOP network.

Improving on that workflow, I created a custom Python SOP which reads the binary Photosynth files directly.

See [[Python: Bit-wise manipulations]] for some information about the process used to extract color information from the Photosynth files.
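As a minimal standalone sketch of that bit-wise extraction: each Photosynth point record ends with a big-endian unsigned short holding a packed 5-6-5 RGB color. The color value below is hypothetical; the shifts, masks and divisors mirror the SOP code later in these notes.
{{{
import struct

# Pack a hypothetical 5-6-5 colour into a big-endian unsigned short,
# then recover the three channels with shifts and masks.
packed = struct.pack('>H', (29 << 11) | (45 << 5) | 7)
(cd,) = struct.unpack('>H', packed)

r = float((cd >> 11) & 0x1f) / 32.0   # top 5 bits
g = float((cd >> 5) & 0x3f) / 64.0    # middle 6 bits
b = float(cd & 0x1f) / 32.0           # bottom 5 bits
print(r, g, b)  # 0.90625 0.703125 0.21875
}}}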

!!!!Links
*[[Photosynth|http://photosynth.net/]]
*[[Photosynth TED Talk (2007)|http://www.ted.com/talks/blaise_aguera_y_arcas_demos_photosynth.html]]
*[[Photo Tourism research site|http://phototour.cs.washington.edu/]]
*[[Original Python script for translating Photosynth .bin files to comma-separated text files|http://binarymillenium.com/2008/08/photosynth-export-process-tutorial.html]]
*[[cURL|http://curl.haxx.se/]] &mdash; I used {{{curl}}} to download the .bin files from the Photosynth site. Try {{{which curl}}} at the command line to discover if //curl// is installed on your system (assuming a Linux/OS X workstation; you know how I feel about that other platform).

!!!!Bespoke Python SOP
To make a new Python SOP that reads the original .bin files downloaded from the Photosynth site:
*Houdini > File menu > New Operator Type...
**//Operator Style:// Python Type
**//Network Type:// Geometry Operator
**Set the //Operator Name//, //Operator Label// and //Save To Library// values
*After accepting the //New Operator Type// window, in the newly-presented //Edit Operator Type Properties// window
**Add these parameters
***{{{source_file}}}, a //File// parameter
***{{{glob}}}, a //Toggle// parameter
**Add the code below to the //Code// tab
{{{
import os
import struct
import glob

def check_floats(input):
    for i in range(3):
        if not((abs(input[i]) > 1e-10) and (abs(input[i]) < 25)): return False
    return True

# ended up not needing the following byte-swapping function
def byte_swap_unsigned_short(us):
    a1 = (us >> 0) & 0xff
    a2 = (us >> 8) & 0xff
    return (a1 << 8 | a2 << 0)

# This code is called when instances of this SOP cook.
geo = hou.pwd().geometry()

# read the parameters of the node
source_path = hou.evalParm("source_file")
source_path = hou.expandString(source_path)
glob_flag = hou.evalParm("glob")

source_files = []
if glob_flag:
    source_files = glob.glob(source_path[:source_path.rfind('/')] + '/*.bin')
else:
    source_files.append(source_path)

# create point attributes
cd_attr = geo.findPointAttrib("Cd")
if not cd_attr:
    cd_attr = geo.addAttrib(hou.attribType.Point, "Cd", (0.0, 0.0, 0.0))

point_count = 0

for source_file in source_files:
    source_raw = open(source_file, 'rb')
    source_raw_size = os.path.getsize(source_file)
    record_size = 14
    record_count = (int(source_raw_size / record_size) - 1)
    initial_offset = source_raw_size - (record_count * record_size)
    # print "  offset = %d" % initial_offset
        
    source_raw.seek(initial_offset)
    
    # create the requested points
    for point_num in range(record_count):
        record = source_raw.read(record_size)
        record_unpacked = struct.unpack('>fffH', record)
    
        x = record_unpacked[0]
        y = record_unpacked[1]
        z = record_unpacked[2]
    
        if check_floats((x, y, z)):
            point = geo.createPoint()
        
            cd = record_unpacked[3]        
    
            r = float((cd >> 11) & 0x1f) / 32.0
            g = float((cd >> 5) & 0x3f) / 64.0
            b = float(cd & 0x1f) / 32.0
        
            point.setAttribValue(cd_attr, (r, g, b))
            point.setPosition((x, y, z))
            point_count += 1

    source_raw.close()

print "kah_photosynth_bin_reader: processed %d points" % point_count
}}}
/*{{{*/
Background: #fff
Foreground: #000
PrimaryPale: #888
PrimaryLight: #222
PrimaryMid: #002D66
PrimaryDark: #014
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #002D66
TertiaryPale: #eee
TertiaryLight: #537AAD
TertiaryMid: #999
TertiaryDark: #666
Error: #f88
/*}}}*/
E-mail: [[ken@kennethahuff.com|mailto:ken@kennethahuff.com]]
[[Updates]]
[[Special topics]]
[I have added some [[bubble notes|Bubble notes]]. &mdash; Ken]

[img[Giant bubbles at Montgomery|inclusions-2010-fall/BigBubblesHere11amMonday.jpg]]

An extra bubble session is scheduled for Sunday, 29 May, 2:00 p.m. until we run out of bubble mix&hellip;

Take a break from working on projects and see if you can beat Leonard&rsquo;s bubble:

[img[Big bubble|inclusions-2010-fall/BigBubble-Leonard.jpg]]
Extra help sessions for Spring 2011:
*--Saturday, 30 April, 1:00 p.m.&ndash;3:00 p.m.--
*--Saturday, 7 May, 10:00 a.m.&ndash;1:00 p.m.--
*Saturday, 28 May, 10:00 a.m.&ndash;5:00 p.m.
I will be visiting students individually at their workstations in Montgomery Hall.

{{kManicule{&#9758;}}}&nbsp;&nbsp;''[[Sign up here.|http://www.kennethahuff.com/teaching/help.html]]''&nbsp;&nbsp;{{kManicule{&#9756;}}}

''Important:'' Do not add your name before 8:00 a.m. on the respective day; the page may clear automatically before then.

Be careful not to modify the extra help page beyond adding your name (if you need help) or removing your name (if you no longer require help).[[*|http://gnomebomb.tumblr.com/]]
Working with a current, recognizable photograph of yourself, prepare a head shot with the following specifications:
*300 pixels wide by 400 pixels tall
* JPEG
* Filename: {{{LastnameFirst.jpg}}} (example: {{{HuffKenneth.jpg}}})
Place the file at the //top level// of your SFDM drop box for the class (just inside the directory titled with your user name).

This file should be in place before the start of the second class and should remain in place for the entire quarter. This counts toward your overall exercise grade. Everyone should do this, even if we have had classes together in the past.
/***
|Name:|HideWhenPlugin|
|Description:|Allows conditional inclusion/exclusion in templates|
|Version:|3.1 ($Rev: 3919 $)|
|Date:|$Date: 2008-03-13 02:03:12 +1000 (Thu, 13 Mar 2008) $|
|Source:|http://mptw.tiddlyspot.com/#HideWhenPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License:|http://mptw.tiddlyspot.com/#TheBSDLicense|
For use in ViewTemplate and EditTemplate. Example usage:
{{{<div macro="showWhenTagged Task">[[TaskToolbar]]</div>}}}
{{{<div macro="showWhen tiddler.modifier == 'BartSimpson'"><img src="bart.gif"/></div>}}}
***/
//{{{

window.hideWhenLastTest = false;

window.removeElementWhen = function(test,place) {
	window.hideWhenLastTest = test;
	if (test) {
		removeChildren(place);
		place.parentNode.removeChild(place);
	}
};


merge(config.macros,{

	hideWhen: { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( eval(paramString), place);
	}},

	showWhen: { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( !eval(paramString), place);
	}},

	hideWhenTagged: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( tiddler.tags.containsAll(params), place);
	}},

	showWhenTagged: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( !tiddler.tags.containsAll(params), place);
	}},

	hideWhenTaggedAny: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( tiddler.tags.containsAny(params), place);
	}},

	showWhenTaggedAny: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( !tiddler.tags.containsAny(params), place);
	}},

	hideWhenTaggedAll: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( tiddler.tags.containsAll(params), place);
	}},

	showWhenTaggedAll: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( !tiddler.tags.containsAll(params), place);
	}},

	hideWhenExists: { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( store.tiddlerExists(params[0]) || store.isShadowTiddler(params[0]), place);
	}},

	showWhenExists: { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( !(store.tiddlerExists(params[0]) || store.isShadowTiddler(params[0])), place);
	}},

	hideWhenTitleIs: { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( tiddler.title == params[0], place);
	}},

	showWhenTitleIs: { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( tiddler.title != params[0], place);
	}},

	'else': { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( !window.hideWhenLastTest, place);
	}}

});

//}}}
[This information is current as of //Houdini 10//.]

Houdini&rsquo;s //hrender// command-line tool can be used for batch rendering locally. If you enter {{{hrender}}} in a Terminal, it will print out the options list. The //[[Rendering from the command line page|http://localhost:48626/rendering/commandline]]// of the documentation describes some other options. Note that //hrender// is a shell script located in {{{$HFS/bin}}} (so you could poke around at its innards if you were so inclined).

Here are some example uses of the //hrender// command. They assume that the current directory is the project directory containing the .hip file ({{{290DaysInOneMinute.hipnc}}}, in this case) and that the .hip contains a Mantra ROP (/out/mantra1). The scene was created with 1800 frames.

This first example will render frames 1–1800:
{{{
hrender -e -v -R -f 1 1800 -d /out/mantra1 290DaysInOneMinute.hipnc
}}}
The next two could be entered in two separate shells. Each will render every other frame, one starting at frame 1, the other at frame 2:
{{{
hrender -e -v -R -i 2 -f 1 1800 -d /out/mantra1 290DaysInOneMinute.hipnc
hrender -e -v -R -i 2 -f 2 1800 -d /out/mantra1 290DaysInOneMinute.hipnc
}}}
Depending on the rendering software and the complexity of your scene, running more than one rendering process at a time on the same workstation often makes better use of its resources. Be cautious of memory usage, however; if possible, avoid pushing the system into swap. The {{{top}}} command at the shell level, or a GUI system-level resource monitor, can be useful in this regard.

This example will render every 100th frame, a good way to test an animation and to get a sense of the per-frame render time:
{{{
hrender -e -v -R -i 100 -f 1 1800 -d /out/mantra1 290DaysInOneMinute.hipnc
}}}
I use the commands as shown above. This runs //hrender// in the foreground of the shell, meaning one rendering process per shell window. You then can press Control+c to kill the rendering. You also could do this:
{{{
hrender -e -v -R -f 1 1800 -d /out/mantra1 290DaysInOneMinute.hipnc &
}}}
The addition of the ampersand (&) at the end will fork the rendering process into the background and return you to the shell prompt. You then will have to use the {{{kill}}} command with the process id or the job number to kill the rendering process. Or, you could just close the Terminal window...

Be aware that if you kill a rendering process, or if it aborts prematurely, you may end up with an incomplete image file for the current frame. Watch out for images with noticeably different file sizes from their neighbors.

If you have created your file in the non-commercial version of //Houdini//, you may need to rename your .hipnc file with a .hip extension. I suggest making a copy of the .hipnc:
{{{
cp someFilename.hipnc someFilename.hip
}}}
You then would use {{{someFilename.hip}}} for the //hrender// command.

This will not convert the file to the commercial version, but will let the //hrender// shell script recognize the file. You now have two copies of the file and may want to remove the second copy once the rendering is complete.
[ //This is a work-in-progress. I need to figure out the best way to share these files.// ]
*{{{ControllingScatterSOPWithPainting_001.hipnc}}} &mdash; A variation on the technique described in the ~SideEffects //Old School Blog// posts mentioned above. The main difference is that I use the term &ldquo;density&rdquo; rather than &ldquo;area&rdquo;.
*{{{CutOutStairs_001.hipnc}}} &mdash; An example using Add ~SOPs and a Cookie SOP to construct a simple stair structure. Fun with connecting the dots.
*{{{SpiralStaircase_001.hipnc}}} &mdash; A spiral staircase with railing. Pay particular attention to the use of the Group Geometry SOP. We will cover this in detail in class 5.
*{{{CorrugatedPanel_001.hipnc}}} &mdash; a basic Group Geometry SOP example.
*{{{FlaredRoof_001.hipnc}}} &mdash; The very simple beginnings of a flared roof line.
*{{{ProceduralBuildings_FootprintDeformation.hipnc}}} &mdash; A house with a procedural footprint.
*{{{VolumesOfCubes_001.hipnc}}} &mdash; Shows the use of a color ramp parameter and the corresponding //[[chramp()|http://localhost:48626/expressions/chramp]]// expression function; The [[Points From Volume SOP|http://localhost:48626/nodes/sop/pointsfromvolume]] used in this file is an example of a built-in operator that is a digital asset. Dive in and take a look.
*{{{Window_001.hipnc}}} &mdash; A wall with lots of configurable windows.
*{{{CatenaryArchWithMetaballs_001.hipnc}}} and {{{CatenaryCurves-Slides.pdf}}}&mdash; Some fun with catenary curves and metaballs.
**[[Metaball SOP documentation.|http://localhost:48626/nodes/sop/metaball]]
**//Wikipedia// pages for [[catenary curves|http://en.wikipedia.org/wiki/Catenary]]
**Euler&rsquo;s Number, //[[e|http://en.wikipedia.org/wiki/E_(mathematical_constant)]]// (Euler is pronounced, &ldquo;~OY-ler&rdquo;)
**[[The Upside Dome: Catenary curves as wireframe in an architectural sculpture installation.|http://www.gijsvanvaerenbergh.com/theupsidedome/]]
*{{{PieWedge_001.hipnc}}} &mdash; Contains an example of [[multiple-line HScript expressions.|Houdini: Multiple-line expressions]]
*{{{SurfaceDeformationBasedOnLuminance/}}} &mdash; This project contains a number of examples of techniques that can be used to deform a surface based on file-based images. It illustrates the use of the {{{tex()}}} expression function. It includes two examples of a ~GeoTIFF workflow and a VOP SOP to deform the surface. The VOP SOP version is much faster, allowing for the use of higher resolution images. The original ~GeoTIFF files were downloaded from the [[United States Geological Survey|http://www.usgs.gov/]]. It also includes examples that use imported [[Digital Terrain Model|http://hirise.lpl.arizona.edu/dtm/]] data and false color altimetry imagery from the [[Mars HiRISE|http://hirise.lpl.arizona.edu/]] program. [[This blog post|http://hirise.lpl.arizona.edu/HiBlog/2010/01/20/first-pds-release-of-hirise-dtms/]] describes the ~HiRISE DTM data. I also included an example of exporting geometry using a ROP Output Driver SOP and then importing the geometry using a File SOP. This is one method which can be used to cache the results of a complex SOP network.
*{{{CHOPsExamples/}}} &mdash; contains a number of small example files for ~CHOPs based networks. //More to come as we progress in class.//
!!!!Custom Python ~SOPs
*{{{python_gis/}}} &mdash; An example of custom Python ~SOPs in Houdini. See the {{{00_README_from_Ken.txt}}} file for more information.
!!!!Parameter and expression sharing for class sessions
//An alternate method is used in environments with a closed network.//

An on-line collaborative system has been prepared for sharing //Houdini// parameter values and expression formulas. During sessions, parameter values will be published and be available in real time via a web browser.

[[Parameter sharing|http://www.kennethahuff.com/teaching/CollaborativeText.html?mode=view&mob=kah_0000]]

This page is cleared automatically after a few hours. You should copy the contents to a text file if you would like to keep the information for reference.

!!!!Documentation readings and links
Many of the links in these notes are to built-in //Houdini// documentation and will work only if //Houdini// currently is running on your workstation and you have accessed the Help system. (//Houdini// uses an embedded web server to manage documentation. That server is started when you first access the documentation.) """SideEffects""" also has published the //Houdini// [[documentation on the web|http://www.sidefx.com/index.php?option=com_content&task=view&id=1085&Itemid=281]] (in which case, example files will not be available).

If you look at the help or status bar of your web browser when you hover over a link and see {{{http://localhost:48626/}}} at the beginning of the URL, you are seeing a link to the built-in system.

As ongoing research, whenever I come across a new operator, subsystem or expression function, I make it a habit to review its documentation. (Yes, I am one of //those// people who read the documentation.) There is a great deal of functionality nestled in the nooks and crannies of //Houdini//. Oh, and do not forget to take a look at the built-in examples as well.

!!!!Introducing yourself to Houdini
In the //[[Houdini Help|http://localhost:48626/]]// documentation, work through the //[[Interface intro|http://localhost:48626/start/intro]]// section and the //Tutorial videos// on the //[[Welcome to Houdini|http://localhost:48626/start/]]// page (7 videos). The //[[Maya to Houdini transition guide|http://localhost:48626/start/maya_transition]]// section also may be useful.

The final video, //[[Node based workflow,|http://www.sidefx.com/images/stories/blogs/houdini10_blog/NodeWorkflow/procedural_forest.mov]]// contains a demonstration of a procedural forest and is a great example touching on many aspects of using //Houdini//. I suggest that you hold off on turning your network into a digital asset until you have more experience with //Houdini//. The video covers this at around the 12 minute mark.
!!!!stamp() function clarification
When working through the //[[Node based workflow|http://www.sidefx.com/images/stories/blogs/houdini10_blog/NodeWorkflow/procedural_forest.mov]]// video, note that the {{{stamp()}}} function accepts strings for its first two arguments. Strings in HScript, one of Houdini&rsquo;s expression languages, are enclosed in double-quotation marks ({{{"}}}). In the video, some people mistake those quotation marks for asterisks ({{{*}}}). Therefore, a {{{stamp()}}} function might look like this:
{{{
stamp("../copy1", "pointNumber", 0)
}}}
Not this:
{{{
stamp(*../copy1*, *pointNumber*, 0)
}}}

Today&rsquo;s class was brought to you by the following """SOPs""" (Surface Operators): """AttribCreate""", Box, Copy, Grid, """LSystem""", Merge, Mountain, Paint, Scatter, Switch and Transform.

In the //[[Houdini Help|http://localhost:48626/]]// documentation, review the pages linked to in the “Getting started” section of the //[[Basics|http://localhost:48626/basics/]]// page. The //pscale// attribute that was added in order to vary the scale of the trees in the procedural forest is described briefly in //[[Instancing point attributes|http://localhost:48626/copy/instanceattrs]]// as part of the //[[Copying and instancing|http://localhost:48626/copy/]]// documentation. Related information can be found on the //[[Attributes|http://localhost:48626/model/attributes]]// page.

[[Here is a link to a post on the SideEffects forums|http://www.sidefx.com/index.php?option=com_forum&Itemid=172&page=viewtopic&t=6679&highlight=copy+sop]] regarding some of the more obscure and poorly-documented features of copying and instancing.

Here is an //Old School Blog// post about [[point instancing|http://www.sidefx.com/index.php?option=com_content&task=view&id=1050&Itemid=216]] and the new {{{instancepoint()}}} expression function (>= Houdini 9). Here is a tutorial by Peter Quint on [[instancing lights|http://vimeo.com/8681435]].

Also of interest is the //[[Edit Parameter Interface window|http://localhost:48626/ref/windows/edit_parameter_interface]]//.

You should review //[[Expression functions|http://localhost:48626/expressions/]]// and //[[Global expression variables|http://localhost:48626/expressions/_globals]]//.

[[A recent customer story|http://www.sidefx.com/index.php?option=com_content&task=view&id=1694&Itemid=68]] on the """SideEffects""" web site highlights the use of Houdini by Framestore in //Avatar.// Pay particular attention to the last six paragraphs. Procedural forests and Copy """SOPs""", oh my!

See the //Morphogenesis (and ~L-Systems)// section of  [[Proceduralism notes and resources|Proceduralism: Notes]] for additional references on ~L-Systems.

A [[recent video masterclass|http://www.sidefx.com/index.php?option=com_content&task=view&id=1810&Itemid=305]] from ~SideEffects gives a very good overview of Python in //Houdini// with emphasis on //Houdini// version 11.

Here is a tip for [[managing crashes and freezes in Houdini|Houdini: Managing crashes and freezes]].

On the ~SideEffects //Old School Blog,// there are three articles related to painting point/particle density: [[one|http://www.sidefx.com/index.php?option=com_content&task=view&id=1030&Itemid=216]], [[two|http://www.sidefx.com/index.php?option=com_content&task=view&id=1032&Itemid=216]] and [[three|http://www.sidefx.com/index.php?option=com_content&task=view&id=1040&Itemid=216]].

*[[Group specification patterns/strings|http://localhost:48626/model/groups#manual]] &mdash; A table which documents the syntax for the Group parameter available on many nodes. Note that you can have multiple group patterns in a single Group parameter. The final result will be the combination of all of the patterns, as interpreted from left to right in the parameter string.

''Resource:'' Graham Thompson&rsquo;s [[houdinitoolbox.com|http://www.houdinitoolbox.com/]] &mdash; This one is relatively new, but already there is a great deal of good material.

!!!!Surface normals in Houdini
New in //Houdini 11// is an option to create surface normal data as a vertex attribute. Previously, normals only could be stored as point attributes. The [[Vertex SOP|http://localhost:48626/nodes/sop/vertex]] is used to manipulate this vertex normal data. An important option on the SOP is //Cusp Normal// which can be used to generate normals based on the relative angles of adjacent faces.

This new feature will allow for the removal of many [[Facet SOPs|http://localhost:48626/nodes/sop/facet]] previously used with their //Unique Points// and //Pre-/~Post-Compute Normals// parameters to generate explicit normals. Diffuse color, alpha and texture coordinates (~UVs) also can be stored on vertices. Each of these attributes can exist on points or on vertices, but not on both at once. For example, you cannot have both point and vertex normals on the same geometry.

!!!!Look development
Here is a note on [[command line rendering with Houdini|Houdini: Command line rendering]].

Top-level documentation pages for look development: //[[Lighting|http://localhost:48626/light/]]//, //[[Shelf Tools: Lights and Cameras tab|http://localhost:48626/shelf/lightsandcameras]]//, //[[Shading|http://localhost:48626/shade/]]// and //[[Rendering|http://localhost:48626/rendering/]]//.

In preparation for look development in //Houdini//, you should be sprinkling your procedural building networks with [[Group SOPs|http://localhost:48626/nodes/sop/group]] that define primitive groups for later material assignment. Use {{{Window_001.hipnc}}} in _MATERIAL as an example.

cmiVFX has [[a video specifically about procedural buildings.|http://www.cmivfx.com/productpages/product.aspx?name=Houdini_Building_Generation]] While I have yet to view the video, I have been very satisfied with previous videos from the company.

Here are some notes on [[shader development, lighting, rendering and tuning of shading quality.|Houdini: Shading notes]] 

!!!!Houdini Digital Assets
Houdini Digital Assets (~HDAs) allow us to encapsulate networks into our own custom operators. In the documentation, review the //[[Digital assets|http://localhost:48626/assets/]]// page. Pay particular attention to the following subtopics:
*//[[Anatomy of a digital asset|http://localhost:48626/assets/anatomy]]//
*//[[Create a digital asset|http://localhost:48626/assets/create]]//
*//[[Create a user interface for an asset|http://localhost:48626/assets/asset_ui]]//
*//[[Load and manage assets on disk|http://localhost:48626/assets/install]]//

[[Famous Curves Index|http://www-history.mcs.st-and.ac.uk/Curves/Curves.html]] &mdash; The stories and formulas for some well-known curves.

!!!!Channel Operators (~CHOPs) references
Two starting points for ~CHOPs information in the documentation: //[[Channel nodes|http://localhost:48626/nodes/chop/]]// and //[[Motion view|http://localhost:48626/ref/views/chopview]]//.

The following nodes and functions were used in class:
*~SOPs: Channel, CHOP Network Manager and Null
*~CHOPs: Export, Geometry, Math, Merge, Noise, Null, Shift and Wave
*Expression functions: {{{chop()}}} and {{{chopn()}}} (be sure to look over the other {{{chop*}}} functions as well)
If you are interested in the sound-related features of ~CHOPs and //Houdini//, Andrew Lowell&rsquo;s electronic book, //[[Simultaneous Music, Animation and Sound Techniques with Houdini|http://www.andrew-lowell-productions.com/andrew-lowell-productions/resources.html]]// is an excellent resource.

If you are interested in music and/or sound visualization, you might enjoy the [[Create Digital Motion|http://createdigitalmotion.com/]] and [[Create Digital Music|http://createdigitalmusic.com/]] blogs. Lots of good stuff.

!!!!Random bits
I have added a note about [[multiple-line expressions in Houdini|Houdini: Multiple-line expressions]], an apparently undocumented feature.

I have mentioned that the .hip file format is a representation of a file hierarchy. The {{{hexpand}}} and {{{hcollapse}}} command-line tools can be used to split a .hip file into its constituent parts and to put a directory structure that represents a .hip file together as a single file. Execute each command without any arguments to see some basic usage. These commands only work with files created using a commercial license of //Houdini//. {{{otexpand}}} and {{{otcollapse}}} //~HScript// commands provide similar manipulation of operator type library (.otl) files.

The //Houdini// ~CHOPs/LED demonstration that I did in class utilized a controller from [[Phidgets|http://www.phidgets.com/]].

Of interest: [[Matt Ebb|http://mke3.net/]] has created a [[raytracer VOP SOP|http://mke3.net/weblog/raytracer-vopsop/]] and has posted some videos: [[one|http://vimeo.com/20700092]] and [[two|http://vimeo.com/22438117]]. [[Another video|http://vimeo.com/21436831]] with a link to the .hipnc file in the comments.
!!!!The Standards
*[[SideEffects Software|http://www.sidefx.com/]] &mdash; The makers of //Houdini//
**[[Forums|http://www.sidefx.com/index.php?option=com_forum]] &mdash; Very active //Houdini// forums
**[[Update journals|http://www.sidefx.com/index.php?option=com_journal&Itemid=213&page=index&journal=default]] &mdash; ~SideEffects posts nightly builds of Houdini, adding features and fixing bugs on an almost-daily basis. The builds can be downloaded [[here|http://www.sidefx.com/index.php?option=com_download&Itemid=208]], but you will need a log-in account. I tend to update once per week.
**[[Houdini Exchange|http://www.sidefx.com/index.php?option=com_wrapper&Itemid=8]] &mdash; A resource for downloading pre-built digital assets, models and scripts for //Houdini//
**[[Tutorials|http://www.sidefx.com/index.php?option=com_content&task=blogsection&id=14&Itemid=132]]
**[[Go Procedural User Guide|http://www.sidefx.com/index.php?option=com_content&task=view&id=1849&Itemid=66]] &mdash; Some new, introductory guides (videos and ~PDFs) to procedural techniques with //Houdini// (November 2010, //Houdini// 11).
* [[odforce|http://odforce.net/]] &mdash; A community web site for //Houdini//; very active forums and a Wiki
* Peter Quint has created an excellent set of //Houdini// tutorials. The [[videos are on Vimeo|http://vimeo.com/user2030228]] and [[the .hip files are here.|http://sites.google.com/site/pqhoudinitutorial/]] Mr. Quint is adding new videos on a regular basis.
*Graham Thompson&rsquo;s [[houdinitoolbox.com|http://www.houdinitoolbox.com/]] &mdash; This one is relatively new, but already there is a great deal of good material.
!!!!Python in Houdini
*A video [[masterclass|http://www.sidefx.com/index.php?option=com_content&task=view&id=1810&Itemid=305]] on Python in //Houdini// with emphasis on version 11. A very good overview of Python in //Houdini// by one of the software engineers responsible for the implementation. The third video describes changes and caveats related to Python in version 11. There are a number of other Python-related presentations on the Side Effects site. This presentation represents the most current information.
!!!!Ken&rsquo;s notes and tutorials
*[[Houdini: Introduction notes]]
*[[Houdini: Example files]]
*[[Managing crashes and freezes|Houdini: Managing crashes and freezes]] &mdash; //Houdini// typically crashes gracefully (saving a copy of the current scene automatically). Here is some information on that process and how to gracefully force a crash.
*[[Houdini: Command line rendering]] &mdash; Some examples of running //Houdini// rendering processes from the command line. Often this lets you more fully utilize the capacity of your workstation (or use multiple workstations, but I didn&rsquo;t tell you that).
*[[Codename: Stonehenge]] &mdash; Using a custom Python SOP in Houdini to read pointcloud data generated by [[Photosynth|http://photosynth.net/]].
*Some notes on [[shader development, lighting, rendering and tuning of shading quality.|Houdini: Shading notes]]
*[[Houdini: Multiple-line expressions]] &mdash; The syntax for multiple line ~HScript and Python parameter expressions in //Houdini//.
!!!!Managing //Houdini// crashes and freezes
When //Houdini// crashes, it normally saves a temporary file that can be used to recover the state of the scene at the time of the crash. If you launched //Houdini// from the command line, //Houdini// will print the name of the temporary .hip file to the terminal window.

If you kill //Houdini// outright (using {{{xkill}}}, for example), the temporary file may not be saved.

If //Houdini// freezes, there is a method for safely killing the //Houdini// process, resulting in a temporary .hip in most cases.

In a terminal shell, use the following command
{{{
ps -e | grep 'houdini'
}}}
The first number in the resulting line printed to the terminal is the process id number for //Houdini.// If you have multiple sessions of //Houdini// running, there will be multiple lines printed. If a line contains &ldquo;{{{sesinetd}}}&rdquo;, it represents the license server &mdash; you do not want to kill that process.

Substitute the appropriate process id number for &ldquo;pid&rdquo; in the following command:
{{{
kill -SEGV pid
}}}
For example:
{{{
kill -SEGV 345
}}}
This use of the {{{kill}}} command will cause a segmentation fault, resulting in a crash from which //Houdini// can save a temporary .hip.

Here is an example of the terminal output under ~RedHat Linux at SCAD:
{{{
khuff@localhost:~$ ps -e | grep 'houdini'
 5686 ?        00:00:02 houdini-bin
khuff@localhost:~$ 
}}}
In this case, the process id is //{{{5686}}}//, so you would
{{{
kill -SEGV 5686
}}}

//Houdini// on Mac OS X is configured differently. For a standalone license, the sample terminal output would be:
{{{
kah-mbp:ProceduralBuildings ken$ ps -e | grep 'houdini'
   51 ??         3:37.45 /Library/Frameworks/Houdini.framework/Versions/Current/Resources/houdini/sbin/sesinetd -D -l /Library/Logs/sesinetd.log -V 2
18020 ??        18:13.37 /Applications/Houdini 10.0.745/Houdini.app/Contents/MacOS/houdini
19125 ttys001    0:00.00 grep houdini
kah-mbp:ProceduralBuildings ken$
}}}
The process id is //18020//, so you would
{{{
kill -SEGV 18020
}}}
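The steps above can be sketched as a small shell function. This is an illustrative sketch, not a supported //Houdini// tool: the grep patterns are assumptions based on the sample {{{ps}}} output shown above, and the actual {{{kill}}} is left commented out so you can verify the pids first.

```shell
# Sketch: filter `ps -e` output down to Houdini process ids, skipping
# the sesinetd license server (which you must NOT kill) and the grep
# process itself. The ps column layout assumed here matches the sample
# output above; double-check it on your platform.
filter_houdini_pids() {
    grep 'houdini' | grep -v 'sesinetd' | grep -v 'grep' | awk '{print $1}'
}

# Usage: print candidate pids, then send SEGV yourself once verified:
#   ps -e | filter_houdini_pids
#   kill -SEGV <pid>
```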
!!!!Python
[[Multiple-line expressions in Python are covered in the documentation.|http://localhost:48626/hom/expressions]] The rules are straightforward. If the expression is on a single line, it is evaluated as an expression. If it is on multiple lines, the code is evaluated as if it were the body of a function and you therefore need to use the {{{return}}} keyword to produce a result. Standard Python whitespace rules apply, but you do not need to indent the entire expression.
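For example, a multiple-line Python parameter expression is evaluated as a function body. The sketch below wraps such a body in an ordinary function so it can run outside //Houdini//; inside //Houdini//, only the indented body (with {{{ch("cy")}}} supplying the value instead of the {{{cy}}} argument) would go in the parameter. The function name and the stand-in argument are my own inventions for illustration.

```python
# Sketch of a multiple-line Python parameter expression (here it is
# the fizz-buzz HScript example from later in this note). In Houdini
# the function body below would be entered directly as the expression;
# the `return` keyword is required.
def fizzbuzz(cy):
    buzz = "Buzz " if cy % 3 == 0 else ""
    fizz = "Fizz" if cy % 5 == 0 else ""
    return "%d %s%s" % (cy, buzz, fizz)

print(fizzbuzz(15))  # prints "15 Buzz Fizz"
```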

!!!!~HScript
When editing an expression in the Parameter pane, you cannot break an expression into multiple lines. If you use the //Edit Expression/Edit String// command in the right-mouse-button menu of a parameter (or press Alt+e), you can add whitespace, including new lines, to your expressions. This can help with legibility but does not affect the structure of the code.

To take this further, you can put curly braces around your expression and then break it into multiple statements. With this syntax, you can define temporary variables to store calculated results that are used repeatedly and to improve the legibility of your expressions at the same time. For example:
{{{
($CY == 0) || ($CY == ($NCY - 1)) || (($NCY % 2) && ($CY == int($NCY/2)))
}}}
becomes
{{{
{
    first = ($CY == 0);
    last = ($CY == ($NCY - 1));
    odd = ($NCY % 2);
    middle = ($CY == int($NCY/2));

    result = first || last || (odd && middle);

    return result;
}
}}}
and
{{{
`ch("cy") + " " + ifs(ch("cy") % 3 == 0, "Buzz ", "") + ifs(ch("cy") % 5 == 0, "Fizz", "")`
}}}
becomes
{{{
`{
    # calculate the bits
    num = ch("cy");
    string buzz = ifs(num % 3 == 0, "Buzz ", "");
    string fizz = ifs(num % 5 == 0, "Fizz", "");

    return (num + " " + buzz + fizz);
}`
}}}

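As a sanity check outside //Houdini//, the row-selection logic in the first example translates directly to Python ({{{cy}}} and {{{ncy}}} stand in for {{{$CY}}} and {{{$NCY}}}; the function name is mine, for illustration only):

```python
def first_last_or_middle(cy, ncy):
    # mirrors the HScript: first || last || (odd && middle)
    first = cy == 0
    last = cy == ncy - 1
    odd = ncy % 2 == 1
    middle = cy == ncy // 2   # int($NCY/2)
    return first or last or (odd and middle)
```

With {{{ncy = 5}}}, rows 0, 2 and 4 are selected; with an even count the middle row is skipped.
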
Here are the rules that I have found (there may be more):
*Enclose the ~HScript code in curly braces.
*End each statement with a semi-colon.
*Scalar numerical variables (variables containing a single number) are created when used; for example, {{{first = ($CY == 0);}}} implicitly declares a scalar numerical variable, {{{first}}}.
*There is no syntactical sugar added to variable names (no {{{$}}}, for example); simply use the name (see the examples above).
*The datatype of string variables does need to be declared; for example {{{string foo = "hello";}}}.
*At the end of the code you have to return a value using the {{{return}}} keyword (see the examples above).
*If the datatype of the return value is a string, the entire expression, including the curly braces, should be enclosed in backticks (see the second example above).
*Comments start with the [[octothorpe|http://en.wikipedia.org/wiki/Number_sign]] ({{{#}}}) or two forward slashes ({{{//}}}) and continue to the end of the line.

Here is an example that incorporates a vector:
{{{
{
    # this is how to declare/assign a vector
    vector foo = vector3(1.0, 2.0, 3.0);
    
    # replace one component of the vector
    foo = vector3(foo[0], 4.0, foo[2]);
    
    # print a component of the vector
    print('foo: ', foo[1]); # simply using foo would return the first component
    return foo.x;
}
}}}

!!!!Speed of multiple-line ~HScript expressions
Using the revamped Performance Monitor in //Houdini 12//, I did a very quick test of the time it takes to execute a single-line ~HScript expression as compared to a multiple-line expression. I used the fizz-buzz expressions above and found that the multiple-line expression was about 6% slower. I leave it to the reader to decide whether the improved legibility of the expression is a fair trade for this additional execution time.

!!!!More?
That is what I know so far. I have not been able to find anything in the documentation about these multiple-line ~HScript expressions. There is a page on [[creating custom expression functions|http://localhost:48626/expressions/_custom]] and a page regarding [[quoting and embedding expressions|http://localhost:48626/expressions/_quoting]] in strings. The //[[print()|http://localhost:48626/expressions/print]]// and //[[vector3()|http://localhost:48626/expressions/vector3]]// expression functions are [[documented, along with many others.|http://localhost:48626/expressions/]]

If you find something or discover a nuance of the syntax, please [[let me know|mailto:ken@kennethahuff.com]] so that I may update this note.
!!!!Lighting and rendering
SideEffects has posted a tutorial video, //[[Mantra & Houdini 11|http://www.sidefx.com/index.php?option=com_content&task=view&id=1412&Itemid=132]]//, that works through lighting and rendering with //Houdini 11//. Below are some general notes and comments specific to [[project 1|VSFX 350: Procedural building project]].

Start and end times are approximate.

''00:00-23:00 &mdash; Lighting setup''

The narrator has some technical difficulties during the portal light section of the video (17:00-23:00). To summarize: if you are using an environment light around an enclosed volume, your camera is inside the volume, and the volume has openings that reveal the outside space containing the environment, portal lights provide much cleaner, more focused (and more efficient) sampling of the environment light.

''No indirect lighting for project 1'' (i.e., photon map generation). Do not use the //GI Light// or the //Caustic Light// without prior permission. (23:00-37:00 in video). For the purposes of project 1, you may skip this section of the video on initial viewing, but you should go back to it later if you are interested in photon-based rendering.

Peter Quint also has posted videos on lighting in //Houdini 11// ([[one|http://vimeo.com/14507733]] and [[two|http://vimeo.com/14508661]]; approximately one hour, in total).

''37:00-53:00 &mdash; Material workflow''

This section gives a nice, quick overview of the new [[Surface Model VOP|http://localhost:48626/nodes/vop/surfacemodel]] (used to create the [[Mantra Surface material|http://localhost:48626/gallery/shop/vopmaterial/mantrasurface]] in the [[Materials/Shaders Gallery|http://localhost:48626/gallery/shop/vopmaterial/]] of the [[Material Palette|http://localhost:48626/ref/panes/materialpalette]]).

Peter Quint also has posted videos on materials in //Houdini 11// ([[one|http://vimeo.com/14092187]] and [[two|http://vimeo.com/14092931]]; approximately one hour, in total).

''53:00-57:00 &mdash; Variance anti-aliasing''

The Noise Level parameter (Mantra ROP -> Properties tab -> Sampling tab) is a very important control. Lowering the value will cause more samples to be taken and therefore improve anti-aliasing quality. More samples = more render time. This applies to renderings using the Mantra Surface Model and PBR rendering.

Using extra image planes (Mantra ROP -> Properties tab -> Output tab -> Extra Image Planes) with the //Direct ray samples// ({{{direct_samples}}}) or //Indirect ray samples// ({{{indirect_samples}}}) VEX variables is a very nice way to keep track of the number of samples being taken in specific areas of an image. These image planes should be disabled for final rendering as they will contribute to frame file size. (at 54:00 in video)

''57:00-59:00 &mdash; Ptex &mdash;'' Not specifically appropriate for project 1, but of interest.

''59:00-END &mdash; Volumetric rendering &mdash;'' Not specifically appropriate for project 1, but of interest.

!!!!Documentation links
*[[Rendering|http://localhost:48626/rendering/]] &mdash; The top-level page on the subject.
*[[Understanding Mantra rendering|http://localhost:48626/rendering/understanding]]
*[[Rendering FAQ|http://localhost:48626/rendering/faq]]
*[[Render quality and improving render time|http://localhost:48626/rendering/renderquality]]
This is an overview of the steps through which I go to make an [[anaglyph|http://en.wikipedia.org/wiki/Anaglyph_image]] based on two photographic stills.

!!!!Sample stereographic pairs
[[You can download a small set of sample stereographic photographs here.|http://dl.dropbox.com/u/7754637/SampleStereoscopicPairsJPEGs.zip]]

[img[Sample stereo pair|inclusions-stereo/000_SampleStereoPairParallel.jpg]]

This is one of the image pairs included above and the one that I will use for this tutorial. If you can free-view stereopairs, the pair is presented here as //parallel// (left on the left, right on the right), as opposed to //crossed// (left on the right, etc.).

!!!!Photoshop templates
I use //Adobe Photoshop// to create anaglyphs. I have prepared two templates, one for vertical compositions and another for horizontal compositions. [[You can download the templates here.|http://dl.dropbox.com/u/7754637/anaglyph_templates.zip]] The .zip file contains two .psd files.

Currently, the files have resolutions of 5,200 by 3,500 pixels. This is based on the resolution of my current camera, a [[Sony NEX-5N|http://www.dpreview.com/products/sony/slrs/sony_nex5n]], which has a native resolution of 4,912 by 3,264. I added a margin of approximately 300 pixels on each axis to create a reasonable work area. I suggest that you resize and save the template files to match the resolution of your camera.

!!!!General workflow
Here are the overall stages of the workflow:
* Take some photographs
* Review and select the stereoscopic pairs in a photo management application
* Prepare a copy of the appropriate template file for a specific pair of images
* Create the anaglyph

The first two stages will be covered in additional notes to follow (links will be added above).

Major steps below will be indicated in ''bold'' lines with a manicule ({{kManicule{&#9758;}}}).

!!!!Preparation of template file
The following instructions assume you have two images, {{{0123.tif}}} and {{{0124.tif}}}, for the left and right eyes, respectively. Your files do not have to be ~TIFFs. I am using numerical filenames because that is what most cameras produce.

''{{kManicule{&#9758;}}} Duplicate and rename the copy of the appropriate template file.''

Assuming the left/right pair are horizontal compositions, I would duplicate the {{{00 anaglpyh horizontal template.psd}}} and rename it {{{0123-4.psd}}}.

''{{kManicule{&#9758;}}} Open the template and the two photographs in Photoshop.''

!!!!Preparing the anaglyph in Photoshop
''{{kManicule{&#9758;}}} Duplicate the two photographs into the template document.''

//You will repeat this step for both the left and right eye images.//

Activate the document/tab with the respective photograph.

In the Layers Palette (Windows menu > Layers), double-click on the //Background// layer. This should bring up a //New Layer// dialog box. Use this dialog box to rename the layer with the photograph&rsquo;s image number, e.g., //Background// becomes //0123//.

[img[Renaming and duplicating the image for one eye|inclusions-stereo/013_RenameAndDuplicateLayer.png]]

With this newly-renamed layer selected in the Layers Palette, duplicate the layer into the template document. There are other ways, but I use the //Duplicate Layer&hellip;// command, either through the Layer menu > Duplicate Layer&hellip; or by right-clicking or control-clicking on the layer in the Layers Palette and selecting Duplicate Layer&hellip; from the resulting pop-up menu. In the //Duplicate Layer// dialog box, change the Destination Document to the anaglyph template, then click OK.

Close the photograph document without saving changes.

Repeat this process for the photograph for the other eye. After doing so, only the template .psd file should be open and it should contain copies of your two photographs on separate layers, in addition to all of the layers and layer sets (folders) from the template file. Your Layers Palette should look something like the following illustration.

[img[Initial layer arrangement in template|inclusions-stereo/016_LayersInTemplate.png]]

!!!!Interrupting our tutorial for some useful keyboard shortcuts&hellip;
|//Shortcut//|//Result//|
|v|Activates the Move Tool|
|c|Activates the Crop Tool|
|tab|Hides/reveals all of the toolbars and palettes|
|f|Cycles between regular display mode and two different full-screen modes|
|Command* + 0 (zero)|Fit image to window|
|Command* + 1|Zoom to 100% (one image pixel for one screen pixel)|
{{{*}}} Control key if using Windows

!!!!Back to our images&hellip;
''{{kManicule{&#9758;}}} Center and sort the two photograph layers.''

In the Layers Palette, select both of the photograph layers using the shift key. With the Move Tool (keyboard shortcut: v [just press //v//; no modifier keys]), click and drag on the photograph layers to center them within the document.

Now sort the two photograph layers into their appropriate positions in the Layers Palette. The left eye image layer should end up below the Hue/Saturation adjustment layer in the &ldquo;LEFT (R Only)&rdquo; layer set (folder) and the right eye image layer should end up below the adjustment layer in the &ldquo;RIGHT (GB Only)&rdquo; layer set. Your Layers Palette should look like this:

[img[Properly sorted layer arrangement in template|inclusions-stereo/017_SortedLayers.png]]

And, your document should look like an anaglyph (but don&rsquo;t look at it with your anaglyph glasses just yet &mdash; the images are not aligned and likely will cause eye strain &mdash; ouch!). If your document does not look like an anaglyph, your image layers likely are outside of the respective layer sets. In the Layers Palette, click and drag the image layers onto the respective layer set layers.

This will drop the layers inside the appropriate layer set. Also at this point, it is a good idea to double-check that the left eye image ended up in the left eye layer set and the same for the right. I always take my photographs in left-right order, so I know that the lesser of the two image numbers will be the left eye image.

[img[Image layers outside their respective layer sets|inclusions-stereo/018_SortedLayersBad.png]]
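Incidentally, what the two layer sets accomplish can be stated very compactly: the anaglyph takes the red channel from the left-eye image and the green and blue channels from the right-eye image. A minimal sketch in plain Python (pixels as {{{(r, g, b)}}} tuples; a real implementation would of course use an image library, and the function name is mine):

```python
def anaglyph_pixels(left_pixels, right_pixels):
    # Red from the left eye; green and blue from the right eye,
    # mirroring the "LEFT (R Only)" and "RIGHT (GB Only)" layer sets.
    return [(l[0], r[1], r[2]) for l, r in zip(left_pixels, right_pixels)]
```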

''{{kManicule{&#9758;}}} Align a feature in the images.''

The next step is to align the two images. This will start the process of removing as much vertical parallax as possible and will establish the plane of //zero parallax// (the portion of stereoscopic depth space that will appear to be at the screen, as opposed to behind the physical screen or in front of the screen in viewer space).

I like to look for a small feature with high contrast (a light-colored object on a dark background or vice versa) that would have been as close to the physical camera as possible, somewhere in the approximate horizontal center of the image. The feature also needs to appear in both photographs. Here are two candidates in the demonstration photographs:

[img[Candidates for image alignment|inclusions-stereo/020_AlignmentFeature.jpg]]

I am going to work with the element to the lower right. Here is an enlargement indicating the selected feature as it appears in both eyes:

[img[Before alignment|inclusions-stereo/021_AlignmentFeatureBefore.jpg]]

When aligning a stereopair, typically, I modify only the right eye image.

With the right eye image layer selected, activate the Move Tool and move the right eye layer until all of the blue and red color fringe disappears. Here is the image after alignment:

[img[After alignment|inclusions-stereo/022_AlignmentFeatureAfter.jpg]]

Notice that as you move away from the alignment feature, there is more and more blue/red color fringe, indicating increasing parallax. This indicates that the other elements in this crop are at a different depth in stereo space, relative to our alignment feature.

''{{kManicule{&#9758;}}} Work to align the remainder of the image.''

Ideally, everything in the image should align vertically (no vertical parallax) and only show red/blue color fringe to the left and right of visual features. In the case of the illustration above, everything does align well. But as we zoom out from the alignment element, working outward from it, vertical parallax starts to show up. For example, here is a detail from the upper right corner of the anaglyph:

[img[Vertical parallax|inclusions-stereo/023_VerticalParallax.jpg]]

I have added the two green lines to highlight the vertical parallax between the left and right eye in this area.

If you are hand-holding the camera that you use to take a stereopair, it is almost impossible to move the camera exclusively along a horizontal axis. Often the camera will rotate (pitch, yaw and/or roll). You may also move the camera closer to or further from the subject and/or introduce vertical movement. All of this leads to vertical parallax in the stereopair.

I do not try to get the alignment perfect. I want this to be fun, not tedious. In the case of this image, though, a minor rotation of the right image by -0.4 degrees will improve the vertical parallax.

With the right eye layer selected, activate the Free Transform tool (Edit menu > Free Transform or Command+t). Before transforming the layer, first move the pivot point (which defaults to the center of the layer) so that it rests on top of the visual element you used for alignment. This will cause the transformation to occur relative to that position rather than relative to the center of the image.

[img[Free Transform pivot|inclusions-stereo/030_FreeTransformPivot.png]]

Just move your cursor over the tiny (and very hard to find) pivot indicator and then click and drag it to a new location.

Because the amount of rotation is so small, I like to type the values in the tool option bar that appears at the top of the screen:

[img[Rotation angle text entry|inclusions-stereo/031_RotationEntry.png]]

If you click in the indicated text field, you can use the up and down arrow keys to rotate in 0.1 degree increments or Shift and the arrow keys to rotate in 1 degree increments. Easier than clicking and dragging if you want or need precision.
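For the mathematically curious (nothing you need for the //Photoshop// workflow), moving the pivot simply changes the point about which every pixel position is rotated. A sketch of the underlying math, with the angle in degrees; note that a positive angle here is counter-clockwise in standard math coordinates, while //Photoshop//'s screen coordinates flip this:

```python
import math

def rotate_about_pivot(x, y, px, py, degrees):
    # Take the offset from the pivot, rotate it, then add the pivot back;
    # this is why the transform happens "relative to that position".
    a = math.radians(degrees)
    dx, dy = x - px, y - py
    return (px + dx * math.cos(a) - dy * math.sin(a),
            py + dx * math.sin(a) + dy * math.cos(a))
```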

//Other transformations//

You may find that scaling the right eye image disproportionately fixes some alignment issues. Again, you will use the Free Transform command, but this time, click and drag the handles in the centers of the edges of the image. Before doing this, you should move the pivot to the location of your original alignment element.

Sometimes, a perspective correction is necessary (Edit menu > Transform > Perspective). This command gives different results depending on which scale handle you choose and which direction you drag. It is difficult to put into words, but intuitive with a bit of experimentation.

''{{kManicule{&#9758;}}} Experiment with horizontal image translation.''

Once your images are aligned, you can shift the two eyes horizontally relative to each other. This //horizontal image translation// will shift where objects appear to fall in the depth of the image. Experiment.

''{{kManicule{&#9758;}}} Save your master image.''

Once you have worked out alignment issues and minimized vertical parallax, save your .psd file. This will become your master image.

''{{kManicule{&#9758;}}} Crop.''

When cropping the image (using the Crop Tool in the Tool Palette [or press c]), you want to eliminate any overhang from the individual images (left and right) and eliminate any edge violations (especially on the left and right of frame). Edge violations occur when an object in your image crosses over from screen space to viewer space while touching or intersecting the sides of the frame. They also occur when an object is supposed to be entirely in viewer space but touches the edge of frame.

Also while cropping, keep in mind the overall composition of the image. Personally, I don't worry about maintaining a particular aspect ratio (the proportion of width to height of the image). I crop so that I have a nice composition and so that I reduce stereoscopic problems.

Notice that I am cropping after I save my &ldquo;master&rdquo; image. Sometimes I want to revisit the horizontal image translation or the overall composition of a stereopair, so I like to keep the extra pixels around (for posterity).

''{{kManicule{&#9758;}}} Prepare a resized JPEG version of the anaglyph.''

[[I have placed the instructions for preparing the JPEG images in a separate note.|Stereoscopic photography class: Image submission guidelines]]
This is a space for testing formatting from Wiki markup and CSS.
!!!!Heading
One paragraph with some quick brown fox jumping over the lazy dog text. One paragraph with some quick brown fox jumping over the lazy dog text. One paragraph with some quick brown fox jumping over the lazy dog text. One paragraph with some quick brown fox jumping over the lazy dog text. One paragraph with some quick brown fox jumping over the lazy dog text. One paragraph with some quick brown fox jumping over the lazy dog text.

Another paragraph with more quick brown fox jumping over the lazy dog text. Another paragraph with more quick brown fox jumping over the lazy dog text. Another paragraph with more quick brown fox jumping over the lazy dog text. Another paragraph with more quick brown fox jumping over the lazy dog text. Another paragraph with more quick brown fox jumping over the lazy dog text. Another paragraph with more quick brown fox jumping over the lazy dog text. 

{{{this would be }}}//{{{some}}}//{{{ code}}}
(These are some notes that I use in the creation and maintenance of this ~TiddlyWiki. &mdash; KAH)

{{kManicule{&#9754; &#9755; &#9756; &#9757; &#9758; &#9759; &#9760;}}}

{{kManicule{&#9758;}}} This would be an important point that I would like to make. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque ac lectus vitae sapien tincidunt lobortis id non tellus. Nulla eget ante quis enim condimentum laoreet id vitae lacus. Etiam hendrerit cursus dictum. Sed sit amet libero ante, in fermentum lacus. Nam sit amet massa ac dolor tincidunt lacinia. Nulla mattis augue et mi sagittis posuere.

!!!!Frequently used internal links
StyleSheet
ViewTemplate
MainMenu
PageTemplate
ColorPalette
!!!!Handy links
http://www.tiddlywiki.com/ &mdash; main ~TiddlyWiki page
[[Basic markup|http://tiddlywiki.org/#%5B%5BBasic%20Formatting%5D%5D]]

!!!!HTML Entities
Entities in HTML documents allow characters to be entered that cannot easily be typed on an ordinary keyboard or that do not translate well from platform to platform without special encoding. They take the form of an ampersand (&), an identifying string, and a terminating semi-colon (;). There is a complete reference [[here|http://www.htmlhelp.com/reference/html40/entities/]]; some of the more common and useful ones are shown below. (KAH: minor edits below; [[original source|http://www.tiddlywiki.com/#HtmlEntities]])

|>|>|>|>|>|>| !HTML Entities |
| &amp;nbsp; | &nbsp; | no-break space | &nbsp;&nbsp; | &amp;apos; | &apos; | single quote, apostrophe |
| &amp;ndash; | &ndash; | en dash |~| &amp;quot; | " | quotation mark |
| &amp;mdash; | &mdash; | em dash |~| &amp;prime; | &prime; | prime; minutes; feet |
| &amp;hellip; | &hellip; |	horizontal ellipsis |~| &amp;Prime; | &Prime; | double prime; seconds; inches |
| &amp;copy; | &copy; | Copyright symbol |~| &amp;lsquo; | &lsquo; | left single quote |
| &amp;reg; | &reg; | Registered symbol |~| &amp;rsquo; | &rsquo; | right  single quote |
| &amp;trade; | &trade; | Trademark symbol |~| &amp;ldquo; | &ldquo; | left double quote |
| &amp;dagger; | &dagger; | dagger |~| &amp;rdquo; | &rdquo; | right double quote |
| &amp;Dagger; | &Dagger; | double dagger |~| &amp;laquo; | &laquo; | left angle quote |
| &amp;para; | &para; | paragraph sign |~| &amp;raquo; | &raquo; | right angle quote |
| &amp;sect; | &sect; | section sign |~| &amp;times; | &times; | multiplication symbol |
| &amp;uarr; | &uarr; | up arrow |~| &amp;darr; | &darr; | down arrow |
| &amp;larr; | &larr; | left arrow |~| &amp;rarr; | &rarr; | right arrow |
| &amp;lArr; | &lArr; | double left arrow |~| &amp;rArr; | &rArr; | double right arrow |
| &amp;harr; | &harr; | left right arrow |~| &amp;hArr; | &hArr; | double left right arrow |

The table below shows how accented characters can be built up by substituting a base character into the various accent entities in place of the underscore ('_'):

|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>| !Accented Characters |
| grave accent | &amp;_grave; | &Agrave; | &agrave; | &Egrave; | &egrave; | &Igrave; | &igrave; | &Ograve; | &ograve; | &Ugrave; | &ugrave; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; |
| acute accent | &amp;_acute; | &Aacute; | &aacute; | &Eacute; | &eacute; | &Iacute; | &iacute; | &Oacute; | &oacute; | &Uacute; | &uacute; | &nbsp; | &nbsp; | &Yacute; | &yacute; | &nbsp; | &nbsp; |
| circumflex accent | &amp;_circ; | &Acirc; | &acirc; | &Ecirc; | &ecirc; | &Icirc; | &icirc; | &Ocirc; | &ocirc; | &Ucirc; | &ucirc; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; |
| umlaut mark | &amp;_uml; | &Auml; | &auml; |  &Euml; | &euml; | &Iuml; | &iuml; | &Ouml; | &ouml; | &Uuml; | &uuml; | &nbsp; | &nbsp; | &Yuml; | &yuml; | &nbsp; | &nbsp; |
| tilde | &amp;_tilde; | &Atilde; | &atilde; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &Otilde; | &otilde; | &nbsp; | &nbsp; | &Ntilde; | &ntilde; | &nbsp; | &nbsp; | &nbsp; | &nbsp; |
| ring | &amp;_ring; | &Aring; | &aring; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; |
| slash | &amp;_slash; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &Oslash; | &oslash; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; |
| cedilla | &amp;_cedil; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &Ccedil; | &ccedil; |
[[Book recommendations]]

//under active development//
[[My favorite command line tools]]
[[Proceduralism: Notes]]

move Linux and other *nix operating systems (command line stuff) to separate note from [[Special topics]]

End-of-quarter clean up
*311: first pass done
*312: first pass done
*350: first pass done
For more information on Ken and his work, visit
*[[Artwork|http://www.kennethahuff.com/]]
*[[Blog|http://www.kennethahuff.com/blog/]]
*[[Vimeo|http://www.vimeo.com/kennethahuff]]
*[[Twitter|http://www.twitter.com/kennethahuff]]
*[[LinkedIn|http://www.linkedin.com/in/kennethahuff]]
*[[Facebook|http://www.facebook.com/kennethahuff]]
*Apple has posted [[a very good primer for shell scripting.|http://developer.apple.com/mac/library/DOCUMENTATION/OpenSource/Conceptual/ShellScripting/Introduction/Introduction.html]] It is written to be platform agnostic &mdash; the information applies to Linux, OS X and Cygwin (command line utilities for Windows).
*The //[[Linux Documentation Project|http://tldp.org/]]// has a number of on-line/downloadable guides that might be of use
**[[Overall guide index|http://tldp.org/guides.html]]
**//[[Bash Guide for Beginners|http://tldp.org/LDP/Bash-Beginners-Guide/html/index.html]]//
**//[[Advanced Bash-Scripting Guide|http://tldp.org/LDP/abs/html/index.html]]//
**//[[GNU/Linux Command-Line Tools Summary|http://tldp.org/LDP/GNU-Linux-Tools-Summary/html/index.html]]//
**etc. (there are many more; see [[the index|http://tldp.org/guides.html]])
*So you want to change the configuration of your shell prompt? Or its color?
**[[Changing your bash shell prompt|http://www.cyberciti.biz/tips/howto-linux-unix-bash-shell-setup-prompt.html]] or [[this one|http://www.ibm.com/developerworks/linux/library/l-tip-prompt/]] or [[this very detailed one|http://tldp.org/HOWTO/Bash-Prompt-HOWTO/]]
**[[Changing the color of your bash shell prompt|http://www.cyberciti.biz/faq/bash-shell-change-the-color-of-my-shell-prompt-under-linux-or-unix/]]
Some of these are general, some are application-specific. Some items are cross-listed.
!!!!Freshly-found (and currently uncategorized)
*http://motion.kodak.com/US/en/motion/Education/Tools_for_Educators/index.htm
!!!!Blend modes
The subject of blend modes comes up when discussing the Layered Texture in //Maya// (and of course when talking about layers in //Photoshop//). Here are some documents that describe, illustrate, demonstrate and give the code for various blend modes:
*[[Blend modes page|http://en.wikipedia.org/wiki/Blend_modes]] on //Wikipedia// (never rely on the big //W// as a final source, but it usually is a good jump off point).
*[[The code behind many blend modes|http://www.nathanm.com/photoshop-blending-math/]] and [[expressed another (possibly easier-to-digest) way.|http://blog.deepskycolors.com/archivo/2010/04/21/formulas-for-Photoshop-blending-modes.html]]
*A post by Joseph Francis on [[killer applications of blend modes|http://www.digitalartform.com/archives/2009/06/photoshop_blend.html]] (many links, good stuff).
*[[How to recreate many of the Photoshop blend modes using curves.|http://photoshopnews.com/2007/09/05/how-to-express-blend-modes-as-curves/]]
*[[Compositing Digital Images|http://keithp.com/~keithp/porterduff/]] by Thomas Porter and Tom Duff (1984) &mdash; The original SIGGRAPH paper describing matte components to rendered images and their use in compositing those images together.
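To give a flavor of the formulas those links describe, here are two of the simplest blend modes, computed per channel on values normalized to the 0&ndash;1 range. These are the standard textbook definitions; a given application's implementation may differ in details:

```python
def multiply(base, blend):
    # Multiply: the result is never brighter than either input.
    return base * blend

def screen(base, blend):
    # Screen is the inverse of multiply (applied to inverted values):
    # the result is never darker than either input.
    return 1.0 - (1.0 - base) * (1.0 - blend)
```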
!!!!Color
*[[Color glossary|http://www.sapdesignguild.org/resources/glossary_color/]] &mdash; An illustrated collection of terms related to color and visual perception.
*[[Color vision|http://webvision.med.utah.edu/Color.html]] &mdash; An overview article about color vision and perception by Peter Gouras.
!!!!File formats
*//[[Ptex|http://ptex.us/]]// is a texture mapping system developed by Walt Disney Animation Studios for production-quality rendering that does not require the use of UV coordinates. It is a feature in //[[RenderMan Pro Server 15.0|https://renderman.pixar.com/products/news/rps15.0_release.html]]//. //Ptex// was released as open source in early 2010, so we should be seeing more of it in our favorite tools.
!!!!File management
*Steven Roselle (Autodesk) has [[a post regarding file management for texture files|http://area.autodesk.com/blogs/stevenr/test]] in //Maya//. If you forget to set your project, ending up with absolute paths for texture file locations, Steven walks you through some nifty tools to fix the problem.
!!!!Fur
*Christopher Cherubini has written [[a tutorial|http://library.creativecow.net/articles/cherubini_christopher/brushstroke_paint.php]] on using //Maya// Fur for a painterly, non-photorealistic look.
*Zap Anderson ([[blog|http://mentalraytips.blogspot.com/]]) has [[a post regarding lighting, shadows and rendering fur in mental ray.|http://mentalraytips.blogspot.com/2007/10/hot-fuzz-hair-revisited.html]]
*David Johnson (blog) has [[a post on his process for developing fur for a polar bear.|http://www.djx.com.au/blog/2008/06/29/tips-for-rendering-maya-fur-with-mentalray/]] It demonstrates good problem solving and a methodical approach to a complex technical issue.
*By way of inspiration, [[here is a 6.5 gigapixel photograph of an eagle feather.|http://www.gigamacro.com/gigapixel_macro_photography_gallery_eagle_feather.php]]
!!!!Lighting
*Zap Anderson ([[blog|http://mentalraytips.blogspot.com/]]) has written [[an excellent post on ambient occlusion.|http://mentalraytips.blogspot.com/2008/11/joy-of-little-ambience.html]]
!!!!Maya workflow
*[[Maya: Toggling the update of render thumbnails]]
!!!!Noise
*[[Ken Perlin|http://mrl.nyu.edu/~perlin/]] invented noise as it is used in procedural texturing in 1983. [[This presentation|http://www.noisemachine.com/talk1/]] walks through the history and some of the techniques. Dr. Perlin received a [[Technical Achievement Award|http://mrl.nyu.edu/~perlin/doc/oscar.html]] from the Academy of Motion Picture Arts and Sciences for this work in 1997.
!!!!Non-photorealistic rendering (NPR)
*Christopher Cherubini has written [[a tutorial|http://library.creativecow.net/articles/cherubini_christopher/brushstroke_paint.php]] on using //Maya// Fur for a painterly, non-photorealistic look.
*Craig Reynolds (of //[[Boids|http://www.red3d.com/cwr/boids/]]// fame) has compiled [[an extensive list of papers and resources for non-photorealistic rendering (NPR).|http://www.red3d.com/cwr/npr/]] Many of the links are dead, but a quick //Google// search should turn up a given resource.
!!!!Normal maps
*Ryan Clark has posted [[a tutorial for using multiple photographs for the creation of normal maps.|http://www.zarria.net/nrmphoto/nrmphoto.html]] Once you have the normal map, a program such as //Crazy Bump// can be used to create a displacement map. (Thanks to Megan Stifter for finding the link.)
!!!!Photography
*[[www.cambridgeincolour.com|http://www.cambridgeincolour.com/]] &mdash; A nice set of easily digestible photography tutorials, many of which are directly applicable to look development.
*Cross-polarization &mdash; Here are a couple of good tutorial/examples: [[one|http://onsetvfxtips.blogspot.com/2009/06/cross-polarization-photography-and-skin.html]] and [[two|http://www.naturescapes.net/042004/wh0404.htm]].
!!!!Render passes
*Autodesk has published a [[whitepaper on render passes.|http://usa.autodesk.com/adsk/servlet/pc/index?siteID=123112&id=13583699]]
!!!!Subsurface scattering, skin, and translucent materials
*[[Scott Spencer|http://www.scottspencer.com/]] posted a demonstration video for painting skin in ZBrush. There is a link to the video (no audio) on [[a ZBrush Central forum post.|http://www.zbrushcentral.com/zbc/showthread.php?t=48840]] A few comments down in the post, Scott gives a brief breakdown of the process. This technique does not use //misss_fast_skin_maya//, but the intent is very similar. For example, when Scott starts by painting vibrant colors, think of the subdermal layer of //misss_fast_skin_maya//. (Thanks to Kerry Anderson for the link.)
//Looking and Seeing// is a series of talks I have given in different forms and different forums. The series focuses on some of the fundamental elements and principles of art and design; developing the artistic eye; and creative problem solving. The talks are richly illustrated with a wide range of visuals. The following artists and references are used throughout the series, but are listed in roughly the order of first mention.

//See also//
*//[[Looking and Seeing Differently]]// &mdash; a version of one of the talks that I presented at SIGGRAPH Asia 2011.
*//[[Brain Kibble|http://www.kennethahuff.com/blog/category/brain-kibble/]]// &mdash; Closely related to //Looking and Seeing//, //Brain Kibble// is my way of sharing the random flotsam and jetsam that inspires and intrigues me and my work.
Here are some artist, idea, video, and image references that were mentioned and shown in a presentation that I gave on Thursday, 15 December 2011 at SIGGRAPH Asia 2011. The talk, //Looking and Seeing Differently//, part of (and based on) materials from an ongoing artist enrichment series, explores how to see our creative work and our world with fresh eyes. (The following are not in any particular order.)
*[[Eirik Solheim|http://eirikso.com/2008/12/27/one-year-worth-of-images-give-some-amazing-videos/]]&rsquo;s 2005 time-lapse photography experiment was a series of images taken from his window. He has made some versions of his photo sequences available under a Creative Commons license.
*[[Solargraphy|http://www.solargraphy.com/]] &mdash; looooong photographic exposures tracing the path of the sun; how-tos and a gallery of images from around the world.
*[[Justin Quinnell|http://www.pinholephotography.org/]]&rsquo;s [[pinhole|http://en.wikipedia.org/wiki/Pinhole_camera]] and [[camera obscura|http://en.wikipedia.org/wiki/Camera_obscura]] photography.
*[[Ollipekka Kangas|http://solarigrafia.blogspot.com/]] has created a series of ultra-long photographic exposures using constructed pinhole cameras. [[Here are some of his photos and documentation of the pinhole photographic devices.|https://picasaweb.google.com/109015420135977051629/SOLARIGRAPHICFILES]]
*[[Étienne-Jules Marey|http://en.wikipedia.org/wiki/Étienne-Jules_Marey]] and [[chronophotography|http://en.wikipedia.org/wiki/Chronophotography]].
*Suzy Leli&egrave;vre&rsquo;s //[[Gravity dice|http://www.suzylelievre.fr/oeuvres/des-gravite4]]//.
*Niklas Roy&rsquo;s //[[Electronic instant camera|http://www.niklasroy.com/project/103/electronic_instant_camera]]//.
*[[High Resolution Imaging Science Experiment (HiRISE)|http://hirise.lpl.arizona.edu/]] on the [[Mars Reconnaissance Orbiter (MRO)|http://mars.jpl.nasa.gov/mro/]].
*Robert Therrien&rsquo;s [[sculpture|http://www.gagosian.com/artists/robert-therrien/selected-works]] (giant table and chairs).
*Jimmy Chen&rsquo;s [[comparison of the hairstyles of Peter Lynch to master artworks.|http://htmlgiant.com/craft-notes/the-painter/]]
*Some of my collected words: virescent, jactitation, infrangible, longueur, objet trouvé, umbra, umbrageous, fissiparous, ineffable, tenacious, momentism, hypertelic and simultaneity. (I will leave it to the reader to find the definitions&hellip;because that is more than half the fun.)
*Naoko Ito&rsquo;s //[[Urban Nature 2011|http://naoko-ito.com/website2011-010.html]]// series.
And to maintain my &ldquo;geek cred&rdquo;, here is the [[Python|http://www.python.org/]] one-liner that I used to generate the hexadecimal slide:
{{{>>> print('\n'.join([' '.join([hex(m + (n * 16) + 256).upper()[3:] for m in range(16)]) for n in range(16)]))}}}
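If the slicing trick in that one-liner ({{{[3:]}}} strips the {{{0X1}}} prefix to get zero-padded digits) reads as opaque, here is a sketch of a more readable equivalent using format specifiers; both produce the same 16&times;16 grid:

```python
# Readable version: format each value as two zero-padded uppercase hex digits.
readable = '\n'.join(
    ' '.join('{:02X}'.format(n * 16 + m) for m in range(16))
    for n in range(16)
)

# The original trick: add 256 so hex() yields '0x100'..'0x1ff',
# then slice off the '0X1' prefix, leaving the last two digits.
tricky = '\n'.join(
    ' '.join(hex(m + (n * 16) + 256).upper()[3:] for m in range(16))
    for n in range(16)
)

print(readable)  # 16 rows: '00 01 ... 0F' through 'F0 F1 ... FF'
```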

And you might enjoy //[[Brain Kibble|http://www.kennethahuff.com/blog/category/brain-kibble/]]//&hellip;

//See also// //[[Looking and Seeing]]//
[[Welcome]]

//Topics//
[[Houdini|Houdini: Links]]
[[Python|Python notes]]
//[[Looking and Seeing]]//
//[[more...|Special topics]]//

//[[Course archive|Archive]]//

//Kenneth A. Huff//
[[Artwork|http://www.kennethahuff.com/]]
[[Blog|http://www.kennethahuff.com/blog/]]
[[Vimeo|http://www.vimeo.com/kennethahuff]]
[[Twitter|http://www.twitter.com/kennethahuff]]
/%[[Flickr|http://www.flickr.com/kennethahuff]]
%/[[LinkedIn|http://www.linkedin.com/in/kennethahuff]]
[[Facebook|http://www.facebook.com/kennethahuff]]
[[Contact]]

[[Brain kibble|http://www.kennethahuff.com/blog/category/brain-kibble/]]
Some people have been experiencing problems moving from dual-monitor systems to the Cintiq monitors in room 204. When new windows/panels are opened, such as the Connection Editor, they do not appear; because of a saved window-position preference, they open off-screen.

There are two methods to fix this:

''Method 1:'' Quit //Maya//. Delete the file, {{{maya/}}}//{{{version}}}//{{{/prefs/windowPrefs.mel}}}. Restart //Maya//.

''Method 2:'' Start //Maya//. Type the following in the MEL command line, followed by Enter: {{{windowPref -removeAll;}}} Then restart //Maya//.

If you want to avoid this issue permanently, go to the Window menu > Settings/Preferences > Preferences: //Interface// category. Turn off //Windows: Remember size and position//.
!!!!Ken&rsquo;s notes and tutorials
*[[Maya: Particle expressions]] &mdash; Some notes on execution of particle expressions in //Maya//
*[[Maya: Toggling the update of render thumbnails]]
*[[Maya dual-monitor fix]] &mdash; Moving from a dual-monitor set-up to a single-monitor setup can cause some window placement issues; this note suggests some solutions.
!!!!The Standards
*[[Autodesk whitepapers|http://usa.autodesk.com/adsk/servlet/pc/index?siteID=123112&id=13583699]] &mdash; Currently contains whitepapers on Nucleus, render passes and ~OpenMaya API programming. These are good, tight bundles of information.
*[[Here is a very good Maya Wiki.|http://www.tokeru.com/t]]
!!!!Python in Maya
*[[PyMEL|http://code.google.com/p/pymel/]] &mdash; Makes Python in //Maya// more Python-like. Highly recommended. A version of ~PyMEL is included with //Maya 2011.// (Woo hoo!) If you are using an earlier version of //Maya//, I recommend working with a 1.x version of ~PyMEL.
*[[Python (in Maya) Wiki|http://pythonwiki.tiddlyspot.com/]] &mdash; A Wiki, written by Eric Pavey, containing Python tips mostly pertaining to Python in //Maya//.
*//See also// [[Python notes]]
!!!!MEL
*[[mel wiki|http://mayamel.tiddlyspot.com/]] &mdash; A Wiki, written by Eric Pavey, containing a large collection of MEL tips, tricks and reference.
**[[Visual list of UI controls in Maya|http://mayamel.tiddlyspot.com/#%5B%5BVisual%20Guide%20of%20UI%20Controls%5D%5D]]
!!!!mental ray
*[[Los Angeles mental ray User Group|http://www.lamrug.org/]] has all sorts of mental ray-related goodies.
**A personal favorite is the [[page explaining sampling in mental ray,|http://www.lamrug.org/resources/samplestips.html]] a good thing to understand for high-quality and efficient renderings.
*[[Zap Andersson|http://mentalraytips.blogspot.com/]] is an engineer with mental images who worked on the fast skin material. His blog is a great resource for mental ray information.
*[[mymentalray.com|http://www.mymentalray.com/]] is a growing resource for the mental ray community.
!!!!Dynamics
*The blog of Autodesk&rsquo;s [[Duncan Brinsmead|http://area.autodesk.com/blogs/duncan]] is a great resource for technical tips and tricks.
!!!!Individuals and blogs
*[[Maya Station|http://mayastation.typepad.com/]] &mdash; A blog written by members of the //Maya// product support team at Autodesk.
*David Johnson &mdash; Author of the [[djx blog|http://www.djx.com.au/blog/]]; blog contains many useful //Maya// tricks and hacks. Mr. Johnson has written [[a number of scripts|http://www.djx.com.au/blog/downloads/]] that enhance //Maya// and serve as fantastic examples of the possibilities of scripting.
*Eric Pavey &mdash; [[mel wiki|http://mayamel.tiddlyspot.com/]] and [[Python (in Maya) Wiki|http://pythonwiki.tiddlyspot.com/]].
*[[Peter Shipkov|http://petershipkov.com/]] &mdash; Mr. Shipkov has created a number of very interesting and powerful toolsets and workflows. Two favorites of mine: //[[SOuP|http://petershipkov.com/development/SOuP/SOuP.htm]]// (adds a large number of procedural tools to //Maya//) and //[[Overburn|http://petershipkov.com/development/overburn/overburn.htm]]// (uses particles and fluids to create very detailed and potentially realistic volumetric effects).
What follows is pseudo-code that describes how and when particle expressions are executed in Maya.

It is instructive to think of the particle expressions as being executed inside two nested loops.
{{{
for each change of current time
{
    for each living particle
    {
        if (current particle's age is 0)
        {
            particle creation expression is executed
        }
        else
        {
            particle runtime expressions are executed
        }
    }
}
}}}
Typically, if these loops were explicit, there would be iteration variables available. In the following example, {{{$i}}} is the iteration variable and a gateway to producing unique results for each step of the for loop.
{{{
int $i;
for ($i = 0; $i < $someNumber; $i++)
{
    ...
}
}}}
During the implicit particle system loops, there //are// iteration variables available. For the &ldquo;each change of current time&rdquo; loop, we have the {{{time}}} and {{{frame}}} keywords. time = frame / (frames per second). Time is measured in seconds and fractions of seconds. Frame 18 of an animation playing at 24 fps would be time 0.75.
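The frame-to-time conversion is simple arithmetic; here is a quick sketch of the example above (in Python, since the formula is language-agnostic):

```python
# time (in seconds) = frame / frames-per-second
fps = 24.0
frame = 18.0

time_in_seconds = frame / fps
print(time_in_seconds)  # 0.75
```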

With the &ldquo;each living particle&rdquo; loop, the per-particle attribute {{{.particleId}}} can be used as the iteration variable. Every particle ever created or born into a particle system will have a unique particleId, starting at 0. There is a catch...if a particle dies, its particleId is not reused. The &ldquo;each living particle&rdquo; loop takes care of this detail behind the scenes. Even though all per-particle attributes are stored in array attributes of the particle shape node, during the particle expression evaluation, we only have direct access to the values for each //individual// particle in turn.

Through the {{{particle}}} command, we have access to another index number for particles using the {{{-order}}} flag. Here is a MEL example that uses the {{{-order}}} flag and {{{particleShape.count}}} attribute to iterate over a set of particles, randomly distributing the particles as if on a sphere with a radius of 5.0 and setting the color of the particles.
{{{
{
    float $startTime = `timerX`;

    string $results[] = `particle -lowerLeft -5 -5 -5 -upperRight 5 5 5 -gridSpacing 0.5`;
    string $particleShape = $results[1];
    
    // Add the per-particle attributes that store color.
    addAttr -longName "rgbPP" -dataType vectorArray $particleShape;
    addAttr -longName "rgbPP0" -dataType vectorArray $particleShape;

    float $radius = 5.0;
    int $particleCount = `getAttr ($particleShape + ".count")`;
    int $p;
    for ($p = 0; $p < $particleCount; $p++)
    {
        vector $random = unit(sphrand(1)) * $radius;
        particle -edit -order $p -attribute "position" -vectorValue ($random.x) ($random.y) ($random.z) $particleShape;
        vector $randomColor = hsv_to_rgb(<<rand(1.0), 1.0, 1.0>>); 
        particle -edit -order $p -attribute "rgbPP"
                 -vectorValue ($randomColor.x) ($randomColor.y) ($randomColor.z) $particleShape;
    }

    float $elapsedTime = `timerX -startTime $startTime`;
    print("// elapsed time: " + $elapsedTime + " seconds.\n");
}
}}}
The {{{-order}}} flag gives us access to the array of current particles, regardless of their {{{.particleId}}} values.
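As an aside, the {{{unit(sphrand(1)) * $radius}}} idiom above places each particle on the surface of a sphere of radius {{{$radius}}}: {{{sphrand}}} picks a random point inside a unit sphere and {{{unit}}} normalizes it out to the surface. The same idea as a Python sketch (the helper name is mine, not a Maya API; normalizing a Gaussian vector is a standard stand-in for {{{sphrand}}} here):

```python
import math
import random

def random_point_on_sphere(radius):
    """Return a uniformly distributed point on the surface of a sphere.

    Normalizing a 3D Gaussian vector gives a uniformly random direction,
    mirroring MEL's unit(sphrand(1)) * $radius idiom.
    """
    x, y, z = (random.gauss(0.0, 1.0) for _ in range(3))
    length = math.sqrt(x * x + y * y + z * z)
    return (radius * x / length, radius * y / length, radius * z / length)

point = random_point_on_sphere(5.0)
```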

The previous example is contrived. Typically, the manipulation of the particles above would be accomplished with a particle expression. For example, if we wanted that kind of randomization to happen only once when the particles were created/born, we might use a creation expression. The following MEL code will create a particle system with such a creation expression. This example creates a static particle system of 9,261 particles. (As an aside, it would be a good question to ask yourself, where does the number 9,261 originate?)
{{{
{
    float $startTime = `timerX`;

    string $results[] = `particle -lowerLeft -5 -5 -5 -upperRight 5 5 5 -gridSpacing 0.5`;
    string $particleShape = $results[1];
    
    // Add the per-particle attributes that store color.
    addAttr -longName "rgbPP" -dataType vectorArray $particleShape;
    addAttr -longName "rgbPP0" -dataType vectorArray $particleShape;

    string $creationExpression =
        (
            "float $radius = 5.0;\n" +
            "position = unit(sphrand(1)) * $radius;\n" +
            "rgbPP = hsv_to_rgb(<<rand(1.0), 1.0, 1.0>>);\n"
        );
    dynExpression -string $creationExpression -creation $particleShape;

    float $elapsedTime = `timerX -startTime $startTime`;
    print("// elapsed time: " + $elapsedTime + " seconds.\n");
}
}}}
Look at the resulting creation expression in the Expression Editor. You will notice that Maya has automatically appended the name of the particle system&rsquo;s shape node to the {{{position}}} and {{{rgbPP}}} attributes.

If you watched the elapsed time messages, you should see that the second version is also much faster to execute (30 times faster when I tested it). The second version also has the advantage that the expression will automatically position and color any new particles born into the system (e.g., if an emitter were added).

If you do not see colorful results from the above, be sure that you play forward past the particle system&rsquo;s {{{.startFrame}}} and are looking at the viewport in shaded mode.
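As for the aside about the particle count: the {{{particle}}} command fills the box from -5 to 5 at a grid spacing of 0.5, which gives 21 positions per axis, and therefore 21 &times; 21 &times; 21 particles in total. A quick check:

```python
# 21 grid positions per axis: -5.0, -4.5, ..., 4.5, 5.0
per_axis = int((5 - (-5)) / 0.5) + 1
total = per_axis ** 3
print(per_axis, total)  # 21 9261
```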
!!!!Rendering
If you would like to render the results from the last code block, switch to mental ray. Then create a Surface Shader and apply it to the particle system. Then create a particleSamplerInfo node. Connect particleSamplerInfo.rgbPP to surfaceShader.outColor. Render.

You can use any material (including mia_material_x, if you like). I used a surface shader to closely mimic what we see in the 3D viewport.

Automatically creating and applying the material is left as an exercise for the reader.
!!!!References
The following nodes, commands, and functions are referenced above. It would be worthwhile to examine their related documentation, in addition to the expression, particle expression, and general particle documentation.

Nodes: particle
Commands and functions: particle, hsv_to_rgb, timerX, unit, sphrand, addAttr
When working with very complex shading networks in //Maya//, the user interface can slow down because of the updating of thumbnails for the rendering nodes. These thumbnails appear in both the Hypershade and in the Attribute Editor.

There is a MEL command, {{{renderThumbnailUpdate}}}, which can be used to toggle the update process on and off. Executing the following in a MEL tab in the Script Editor or on the MEL Command Line will turn off the updating:
{{{
renderThumbnailUpdate false;
}}}
And this will turn it back on again:
{{{
renderThumbnailUpdate true;
}}}

You also could make a Shelf Button with the following MEL command:
{{{
renderThumbnailUpdate (!`renderThumbnailUpdate -query`);
}}}
Every time you click on the shelf button, the update will be toggled.

!!!!toggle_renderThumbnailUpdate.mel
Finally, you can install [[this script|inclusions-2010-fall/toggle_renderThumbnailUpdate.mel]] in your {{{maya/scripts}}} or {{{maya/}}}//{{{version}}}//{{{/scripts/}}} directory and update your {{{userSetup.mel}}} file with the following line:
{{{
source toggle_renderThumbnailUpdate;
}}}
This will create a tiny red/green button in the Main Status Line (that is, the toolbar across the top of the main //Maya// window). Clicking that button toggles the render thumbnail status. (The button may not show up until you move your mouse over the Status Line for the first time in a session.)

''Download:'' [[toggle_renderThumbnailUpdate.mel|inclusions-2010-fall/toggle_renderThumbnailUpdate.mel]]
If this opens the MEL script directly in your browser, go back, and right-click on the link to use the &ldquo;Download Linked File As...&rdquo; (or equivalent) command.

The script is based on code from [[this post|http://mayastation.typepad.com/maya-station/2010/05/button-to-disable-thumbnails-update-in-hypershade-mel.html]] on [[mayastation.typepad.com|http://mayastation.typepad.com/]]. I made some minor improvements.
[//This note currently is under development.//]
!!!!{{{curl}}} &mdash; [[curl.haxx.se site|http://curl.haxx.se/]]
!!!!{{{python}}} &mdash; [[python.org|http://www.python.org]]
!!!!Rendering commands
The command line version of a particular rendering package.
!!!!{{{rsync}}} &mdash; [[site|http://samba.anu.edu.au/rsync/]]
Here are some things I saw, learned, discovered and heard at [[PyCon 2011|http://us.pycon.org/2011/]]&hellip;
!!!!Day One
[[Hilary Mason|http://www.hilarymason.com/]] of [[bit.ly|http://bit.ly/]] gave the keynote address:
*Once a bit.ly link is created, it will never go away or change. (Lesson: Set up your personalized bit.ly links now.)
*[[Computational thinking|http://www.cs.cmu.edu/~CompThink/]] is important. Here is [[a paper by Jeannette M. Wing|http://www.cs.cmu.edu/~CompThink/papers/Wing06.pdf]].
*[[dataists.com|http://dataists.com]]
*[[Genetic Programming: Evolution of Mona Lisa|http://rogeralsing.com/2008/12/07/genetic-programming-evolution-of-mona-lisa/]]
*//[[Programming Collective Intelligence|http://oreilly.com/catalog/9780596529321]]// by Toby Segaran (also available through Safari Books Online through the SCAD Library).
[[Celery|http://celeryproject.org/]] &mdash; a queue for distributed tasks.

So, if you have something like {{{__init__()}}}, how do you say the &ldquo;{{{__}}}&rdquo; (two underscores) part out loud? I was saying, &ldquo;underscore, underscore, init, underscore, underscore.&rdquo; Ugh. What a mouthful. This weekend, I heard, &ldquo;under, under, init&rdquo;, and &ldquo;dunder init&rdquo; (my favorite).

[TODO: more of day 1]
!!!!Day Two
I do not know where I will use this, but I know I will use it &mdash; //[[Cog|http://nedbatchelder.com/code/cog/]]// by Ned Batchelder, a Python package which &ldquo;is a code generation tool&hellip;lets you use pieces of Python code as generators in your source files to generate whatever code you need&rdquo;. Look at the example code; it will make sense.

[TODO: more of day 2]
!!!!Day Three
[TODO: day 3]
Here is a selection of open source projects that I have found useful.

//See also// [[Python: Interesting packages and modules]]

//Many of these require compiling from C or C++ code.// Not for the faint of heart (or the impatient), but very doable and certainly worth the effort. Even if you do not compile and use them, it often can be very informative to read the project pages or project documentation for a better understanding of the methods (e.g., ~OpenEXR).
*[[Alembic|http://code.google.com/p/alembic/]] &mdash; A scene geometry interchange format (a collaboration between ILM and Sony)
*[[Cortex|http://code.google.com/p/cortex-vfx/]] &mdash; A code library for software development in visual effects (with Python bindings) (Image Engine)
*[[MacPorts|http://www.macports.org/]] &mdash; For those Mac OS X people out there &mdash; this project/community works to make open source software painless to install. Many of the software dependencies of the other projects listed here (as well as some of the projects themselves) can be installed as simply as typing {{{sudo port install openexr}}}, which takes care of downloading and compiling the source code, along with any dependencies. I always check the //[[Available Ports|http://www.macports.org/ports.php]]// page first when I need a new software library.
*[[OpenEXR|http://www.openexr.com/]] &mdash; An image file format which elegantly handles floating point data and an arbitrary number of channels (ILM)
*[[Partio|http://www.disneyanimation.com/technology/partio.html]] &mdash; A particle system translation toolkit; includes some nearest-neighbor utilities (Hello, Maya) and a Python interface (Walt Disney Animation Studios)
*[[Ptex|http://ptex.us/]] &mdash; An image file format and corresponding file library; specifically for texturing 3D geometry with the benefit of not requiring a UV layout (Walt Disney Animation Studios)
*[[PyMEL|http://code.google.com/p/pymel/]] &mdash; makes Python in //Maya// more Python-like (Luma Pictures)
*[[TiddlyWiki|http://www.tiddlywiki.com/]] &mdash; A Wiki-like system self contained within a single HTML file. I use a ~TiddlyWiki to maintain these notes.
/%
TODO
openframeworks
cinder
processing
%/
<div class='header'>
<div class='headerShadow'>
<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
</div>
<div class='headerForeground'>
<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
</div>
</div>
<div id='mainMenu' refresh='content' tiddler='MainMenu'></div>
<div id='sidebar'>
<div id='sidebarOptions' refresh='content' tiddler='SideBarOptions'></div>
<div id='sidebarTabs' refresh='macro' force='true' macro='slider chkSideBarTabs SideBarTabs "index »" "display lists of tiddlers"'></div>
</div>
<div id='displayArea'>
<div id='messageArea'></div>
<div id='tiddlerDisplay'></div>
</div>
These notes are being developed in conjunction with a workshop I currently am teaching. Watch for updates over the next eight weeks or so. To start, the notes will be organized on a session-by-session basis. &mdash; KAH (17 January 2012)

''IMPORTANT:'' When viewing these notes, be certain to manually refresh your browser. Sometimes an older, cached version will show up by default. Refreshing guarantees the latest stuff&hellip;

!!!!Session 5 &mdash; 21 February 2012

&ldquo;You don&rsquo;t make a photograph just with a camera. You bring to the act of photography all the pictures you have seen, the books you have read, the music you have heard, the people you have loved.&rdquo; &mdash; Ansel Adams

[[Anna Paola Guerra|http://www.flickr.com/photos/annapaolaguerra/]] (flickr) has an interesting stream of photographs posted. There is a very consistent feel to the images, especially on the first few pages. The specific subjects vary dramatically, but the mood remains consistent. What is it about the photographs that imparts this consistency of artistic vision?

As another example of this type of pervasive consistency, see [[Hengki Koentjoro|http://koentjoro.com/portfolio.php]] and/or his [[flickr|http://www.flickr.com/photos/21290636@N06/]] stream.

//The Big Picture// blog has a nice entry on [[photographed reflections|http://www.boston.com/bigpicture/2012/02/photo_reflections.html]].

!!!!Session 4 &mdash; 14 February 2012
&ldquo;Nobody sees a flower &mdash; really &mdash; it is so small it takes time &mdash; we haven’t time &mdash; and to see takes time, like to have a friend takes time.&rdquo;<br>&mdash; Georgia O&rsquo;Keeffe (photograph by [[Alfred Stieglitz|http://www.metmuseum.org/toah/hd/stgp/hd_stgp.htm]])
*[[Pigeon photography|http://en.wikipedia.org/wiki/Pigeon_photography]]
*[[Heinz Maier|http://www.thisiscolossal.com/2011/10/high-speed-liquid-and-bubble-photographs-by-heinz-maier/]]
*[[Maruyama Shinchi|http://shinichimaruyama.com/]]
*[[Luca Piero|http://www.flickr.com/photos/sottounponte/6684583905/]]
*[[Susan Derges|http://www.susanderges.com/]]
*Andy Goldsworthy &mdash; [[Digital Catalog|http://www.goldsworthy.cc.gla.ac.uk/]] (1976-1986)
*&Eacute;tienne-Jules [[Marey|http://en.wikipedia.org/wiki/Étienne-Jules_Marey]]
*[[Hengki Koentjoro on flickr|http://www.flickr.com/photos/21290636@N06/6085212110/]]
*[[Harold Eugene Edgerton|http://en.wikipedia.org/wiki/Harold_Eugene_Edgerton]]
*[[Light paintings for Hans Finsler|http://sammlungen-archive.zhdk.ch/code/emuseum.asp?emu_action=searchrequest&newsearch=1&moduleid=1&profile=objectsde&currentrecord=1&searchdesc=Lichtzeichnungen&style=single&rawsearch=id/,/is/,/53230/,/false/,/true]]
*[[Barry Underwood|http://barryunderwood.com/]]
*Terence Chang&rsquo;s //[[SFO crunch|http://www.flickr.com/photos/exxonvaldez/3734764542/]]//
*Sea Moon&rsquo;s (flickr) //[[Intentional rods|http://www.flickr.com/photos/14833125@N02/sets/72157624138757418/with/4640703751/]]//
*[[Emmet Gowin|http://www.geh.org/ne/str085/htmlsrc5/gowin_sld00001.html]]
*//[[2010 National Geographic Photography Contest|http://www.boston.com/bigpicture/2010/11/national_geographics_photograp.html]]//
*[[Alexey Titarenko|http://www.alexeytitarenko.com/]]
*[[Justin Quinnell|http://www.pinholephotography.org/]] &mdash; pinhole photography
*[[Brice Bischoff|http://www.bricebischoff.com/]]
*[[Tom Lacoste|http://www.flickr.com/photos/tomlacoste/sets/72157626947212088/]]
*[[Shorpy|http://www.shorpy.com/]] &mdash; bunches and bunches of old, higher-resolution-than-you-might-normally-find photographs
*Someone in class passed along a link to &ldquo;[[70 Imaginative Examples Of Conceptual Photography|http://photo.tutsplus.com/articles/inspiration/70-imaginative-examples-of-conceptual-photography/]]&rdquo;
*Here are two posts on //The Online Photographer// blog regarding aspect ratios and raising the question about square aspect ratios: [[one|http://theonlinephotographer.typepad.com/the_online_photographer/2012/01/why-not-square-sensors.html]] and [[two|http://theonlinephotographer.typepad.com/the_online_photographer/2012/01/squares.html]]
*whew.

aspect ratio articles [TODO]

!!!!Session 3 &mdash; 7 February 2012
Artist shown: [[Lionel Catelan|http://www.lionelcatelan.com/]]

//Suggested experiments before next session//

We saw today that while we can produce equivalent exposures using a wide range of combinations of aperture, shutter speed, and sensitivity/ISO, there are visual differences between the photographs. It is good to get a working sense of the exposure limits, both high and low, of a given camera and lens combination. Try to take at least three series of images in which one of aperture, shutter speed, or sensitivity/ISO is kept constant and the other two exposure attributes are varied to maintain a constant exposure over the entire series of photographs.

For example, set ISO to 100 (this will be the constant). Now set your aperture to the minimum //f//-stop number (i.e., the maximum aperture). Now find a shutter speed that gives you a well-exposed image (not too dark or too light). This will be the first image in the sequence. Now increase the //f//-stop number (decrease the aperture) by one stop and double the exposure time (e.g., 1/30 second becomes 1/15 second). You should see the same exposure value. Keep up this progression until you reach the maximum //f//-stop number (minimum aperture) on your camera.
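The reciprocity in that progression can be checked numerically. At a fixed ISO, exposure value is EV = log2(N^2 / t), where N is the //f//-stop number and t is the exposure time in seconds; closing the aperture by one stop doubles N^2, so doubling t leaves EV unchanged. A sketch (the specific f/2.8 and f/4 pairing is just an illustration):

```python
import math

def exposure_value(f_number, shutter_seconds):
    # EV = log2(N^2 / t); equal EV means equal exposure at a fixed ISO.
    return math.log2(f_number ** 2 / shutter_seconds)

# One stop smaller aperture (f/2.8 -> f/4) with twice the exposure
# time (1/30 s -> 1/15 s) gives the same exposure value.
ev_wide = exposure_value(2 ** 1.5, 1 / 30)  # f/2.8 is exactly 2^1.5
ev_narrow = exposure_value(4.0, 1 / 15)
print(round(ev_wide, 3), round(ev_narrow, 3))
```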

!!!!Session 2 &mdash; 31 January 2012
&ldquo;There is nothing worse than a sharp image of a fuzzy concept.&rdquo; — Ansel Adams

Artists/research mentioned/shown:
*Jack Germsheid&rsquo;s //[[I Am The Camera|http://www.flickr.com/photos/portal23/4336140485/]]//.
*[ ''updated'' ] Kim Keever &mdash; [[Some work|http://www.ktfineart.com/artists/kim_keever/]] and a [[video|http://newarttv.com/Kim+Keever]]. Also, //[[Eroded Man|http://butdoesitfloat.com/2618153/For-we-shall-make-after-all-a-fair-conclusion-to-this-brief-music]]// by Keever.
*[[Edward H. Adelson|http://persci.mit.edu/people/adelson]] &mdash; //[[Checkershadow Illusion|http://persci.mit.edu/gallery/checkershadow]]// and //YouTube// user brusspup&rsquo;s [[video demonstrating the proof|http://www.youtube.com/watch?v=z9Sen1HTu5o]]. (Looks like [[brusspup|http://www.youtube.com/brusspup]] has some other interesting illusion-related videos as well.)
*[[Missing letters photograph|http://www.flickr.com/photos/phospho/5332453505/]] (phospho on //flickr//).
*Jos&eacute; Antonio Mill&aacute;n&rsquo;s //[[Ghost Buildings|http://jamillan.com/medianeraw.htm]]//.
*Jason Mena&rsquo;s //[[Untitled (2009)|http://www.jasonmena.com/index.php?/images/untitled-oblique/]]// and //[[Lights out for the territory|http://www.jasonmena.com/index.php?/projects/2007/]]//.
*Chris Harding&rsquo;s //[[We The Robots|http://www.chrisharding.net/wetherobots/]]//.
*[[Fiona Watson|http://www.fionawatson.co.uk/]] and her //[[flickr|http://www.flickr.com/photos/wildgoosechase/]]//.
*Ga&euml;lle Villedary&rsquo;s //[[Tapis Rouge|http://www.gaellevilledary.net/#!tapis-rouge]]//.
Before our next session, you should find the manual (or the-most-manual-possible) mode on your camera. You should look for specific controls for aperture, shutter speed and sensitivity (ISO), the exposure meter and exposure compensation. If you have not already done so, please share a folder/directory with me via dropbox.com and send me an email with your project idea(s). Also, please remember to bring your camera to next session.

!!!!Session 1 &mdash; 17 January 2012
*Artists mentioned/shown:
**[ ''updated'' ] Kim Keever &mdash; [[Some work|http://www.ktfineart.com/artists/kim_keever/]] and a [[video|http://newarttv.com/Kim+Keever]]. Also, //[[Eroded Man|http://butdoesitfloat.com/2618153/For-we-shall-make-after-all-a-fair-conclusion-to-this-brief-music]]// by Keever.
**[[Robert and Shana ParkeHarrison|http://www.parkeharrison.com/]]
**[[Jan von Holleben|http://www.janvonholleben.com/?page_id=4]]
**[[David Hockney|http://www.hockneypictures.com/photos/photos_collages.php]]
*Quotations cited:
**&ldquo;The single most important component of a camera is the twelve inches behind it.&rdquo;<br>&mdash; Ansel Adams
**&ldquo;Begin anywhere.&rdquo;<br>&mdash; John Cage
**&ldquo;We are most truly ourselves when we achieve the seriousness of a child at play.&rdquo;<br>&mdash; Heraclitus of Ephesus (There is a variation on this quote by Nietzsche, but I prefer the Heraclitus version.)
*Other references (not mentioned in class):
**Andy Ilachinski&rsquo;s //[[Tao of Photography|http://tao-of-digital-photography.blogspot.com/]]// is one of my favorite photography/creativity blogs.
Before our next session, I would like you to watch/read the following:
*A ~TEDx presentation by photographer Chris Orwig, &ldquo;[[Finding the magnificent in the mundane|http://www.youtube.com/watch?v=78ARBe2JCXw]]&rdquo;.
*Read [[this blog post by Julieanne Kost|http://blogs.adobe.com/jkost/2012/01/relationships-between-events.html]] and then watch [[this video slideshow.|http://tv.adobe.com/watch/adobe-evangelists-julieanne-kost/passing-time-moments-alone/]]
*Ze Frank on [[ideas and brain crack|http://www.zefrank.com/theshow/archives/2006/07/071106.html]]. (F-bomb warning.)
[TODO: //This note currently is under development.//]

As you explore procedural techniques, at some point, be it sooner or later, you will become frustrated with trying to implement your ideas using purely graphical, node-based systems. When you reach that point, it is time to start writing code.

The notes and references below are a mix of systems with graphical user interfaces and programming, sometimes combining the two.

//See also:// [[Houdini notes|Houdini: Links]]
!!!!Examples
*Jared Tarbell&rsquo;s [[complexification.net|http://complexification.net/]] and its //[[Gallery of Computation|http://complexification.net/gallery/]]//. Some beautiful examples of procedural techniques used to create complex patterns. [[Processing|http://www.processing.org]] source code is provided for many of the pieces.
*Kevin Webster&rsquo;s //[[metacosm project|http://rabidpraxis.com/projects/metacosm_project/]]//. Kevin has posted a number of the generated videos on [[vimeo|http://vimeo.com/kevinwebster/videos/]] and has followed up the project with a new work-in-progress, //[[the metacosm project redux|http://rabidpraxis.com/projects/metacosm_project_redux/]]//.
*&hellip;
!!!!Behavioral animation
*[[The mathematics of fish schools and flocks of humans|http://arstechnica.com/science/news/2011/02/the-mathematics-of-fish-schools-and-flocks-of-humans.ars]]
*Craig Reynolds &mdash; [[boids|http://www.red3d.com/cwr/boids/]] and [[behavioral animation links|http://www.red3d.com/cwr/steer/]].
*[[Couzin Lab at Princeton|http://icouzin.princeton.edu/]]
*&hellip;
!!!!Morphogenesis (and ~L-Systems)
Here is a site with [[a very nice introductory explanation|http://www.selcukergen.net/ncca_lsystems_research/lsystems.html]] of [[L-Systems|http://en.wikipedia.org/wiki/L-system]], some [[specific information for implementation|http://www.selcukergen.net/ncca_lsystems_research/houdini.html]] in //Houdini// and [[some examples|http://www.selcukergen.net/ncca_lsystems_research/research.html]].

[[cmiVFX|http://www.cmivfx.com/]] has a couple of videos, //Houdini ~L-System Essentials, [[Volume 1|http://www.cmivfx.com/productpages/product.aspx?name=Houdini_L-Systems_Vol_1]] and [[Volume 2|http://www.cmivfx.com/productpages/product.aspx?name=Houdini_L-Systems_Vol_2]]//, that are dense introductions to ~L-Systems in //Houdini//. Worth the investment and study.

And do not forget about //[[The Algorithmic Beauty of Plants|http://algorithmicbotany.org/papers/#abop]]// and the entire [[AlgorithmicBotany.org*|http://algorithmicbotany.org/]] site, which includes [[an exhaustive publications list*|http://algorithmicbotany.org/papers/]] and //[[Visual Models of Morphogenesis: A Guided Tour|http://algorithmicbotany.org/vmm-deluxe/TitlePage.html]]//, a wonderful survey of algorithmic techniques for modeling the development of organic forms (and many inorganic forms as well).

"""*""" //Of/for the Biological Modeling and Visualization research group in the Department of Computer Science at the University of Calgary.//
!!!!Processing (visual programming environment/language/international community)
*[[processing.org|http://www.processing.org]]
*&hellip;

&hellip;
!!!!Tutorials and other learning resources
*http://www.webmonkey.com/2010/02/get_started_with_python/ &mdash; A great basic introduction to Python.
*http://docs.python.org/tutorial/ &mdash; An introduction to Python.
*--http://python.sourcequench.org/ &mdash; A series of videos based on the &ldquo;official&rdquo; Python tutorial above. Fifty-one //soup&ccedil;ons// of Python, most around five minutes each. The videos also are available as [[a podcast on iTunes.|http://itunes.apple.com/us/podcast/python-osmosis/id317462382]] I recommend that you treat this as a supplement to other materials.-- UPDATE: As of 29 June 2012, the iTunes podcast version seems to have gone off-line. The web site version currently is difficult to navigate and also is missing episodes. I have left the links active, if you want to investigate.
*The [[Google Python Class|http://code.google.com/edu/languages/google-python-class/]] assumes very little previous programming knowledge and is a good introduction to the language (includes videos, write-ups and exercises). The course originator and lecturer, [[Nick Parlante|http://www-cs-faculty.stanford.edu/~nick/]] also has created [[codingbat.com|http://codingbat.com/python]], a site with on-line exercises for Python (and Java).
*The MIT ~OpenCourseWare version of //[[6.00SC Introduction to Computer Science and Programming|http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-00sc-introduction-to-computer-science-and-programming-spring-2011/]]// doesn&rsquo;t pull any punches and will take you from {{{0}}} to {{{111100}}}, using Python as the instructional language.
!!!!References, documentation and resources
*http://python.org/doc/ &mdash; Links to documentation for both current and past versions of Python.
*//[[Python Wiki|http://wiki.python.org/moin/FrontPage]]// &mdash; A range of Python-related information. A good departure point for exploration.
*[[PEP 8, Style Guide for Python|http://www.python.org/dev/peps/pep-0008/]] &mdash; The basics of coding style in Python from the source.
*Asking Google a question works very well. Typically the first link will be to the Python 2.x documentation for a particular topic. [[Example.|http://www.google.com/search?hl=en&q=python+string+iterator]]
*These tend to get a bit deeper&hellip;some very good Python information on IBM&rsquo;s developer site: http://www.ibm.com/developerworks/views/linux/libraryview.jsp?search_by=python
*Python is easily extended with additional module packages. The [[Python Package Index (PyPI)|http://pypi.python.org/pypi]] contains a list of many available packages. (//See also// [[Python: Interesting packages and modules]].)
*Doug Hellmann does very nice breakdowns of various Python modules in his [[Python Module of the Week|http://www.doughellmann.com/PyMOTW/]].
*http://stackoverflow.com/questions/tagged/python &mdash; crowd-sourced answers to programming questions. This link is for questions tagged &ldquo;Python&rdquo;.
*[[Python: Interesting packages and modules]] &mdash; Based on my experiments, research and explorations.
!!!!Book-like websites
These are fully-developed online books.
*http://www.diveintopython.net/ &mdash; An on-line book (in many forms) that is a good introduction.
*http://learnpythonthehardway.org/ &mdash; //Learn Python The Hard Way//
!!!!Books
*//[[Learning Python, Fourth Edition|http://oreilly.com/catalog/9780596158071/]]// by Mark Lutz is an extensive volume (> 1,200 pages) that provides an excellent, detailed introduction to the language. This fourth edition covers both 2.x and 3.x.
*A book that sits beside me when programming in Python: //[[Python Pocket Reference|http://oreilly.com/catalog/9780596158095]]// by Mark Lutz. The fourth edition covers both 2.x and 3.x.
*For those at SCAD, O&rsquo;Reilly Safari Books Online is now available through the SCAD Library. ([[link|http://0-proquest.safaribooksonline.com.library.scad.edu/]]) Many books on digital media programming are available, including the two above. (As of Spring Quarter 2011.)
!!!!&Uuml;ber-geeky zone
Here is some advanced and/or interesting stuff that does not fit elsewhere on this page.
*There was an excellent [[PyCon 2010 talk by Brandon Craig Rhodes regarding the internal workings of dictionaries in Python.|http://pyvideo.org/video/276/pycon-2010--the-mighty-dictionary---55]] Explains a bit about sets and hash tables as well.
*David Beazley has two presentations, one on [[Python generators|http://www.dabeaz.com/generators-uk/]] and a follow-up presentation on [[coroutines and concurrency|http://www.dabeaz.com/coroutines/]], both of which I found very useful and which reframed my experiences with generators. ~PDF files of the presentations slides and source code examples are provided.
*[[Python history|http://python-history.blogspot.com/]] from its creator, [[Guido van Rossum|http://www.python.org/~guido/]]. Here is a post about the [[design philosophy of Python|http://python-history.blogspot.com/2009/01/pythons-design-philosophy.html]] and one which gives an [[introduction and overview of the language|http://python-history.blogspot.com/2009/01/introduction-and-overview.html]].
!!!!Ken&rsquo;s notes
*[[Bit-wise manipulations in Python|Python: Bit-wise manipulations]]
*[[Of interest at PyCon 2011]] &mdash; Some notes on things that interested me at [[PyCon 2011|http://us.pycon.org/2011/]].
*[[Python: Tidbits]] &mdash; Small bits of Python that do not fit elsewhere.
!!!!Ken&rsquo;s class projects and exercises
Here are instructions for some exercises that I present to students in some of my programming classes. They are listed roughly in order of difficulty.
*[[Ishtime|TECH 312: Ishtime assignment]]
*[[Sentence generator|TECH 312: Sentence generator assignment]]
*[[Fit function|TECH 312: Fit function assignment]]
*[[Data parsing|TECH 312: Data parsing assignment]]
//Maya//-specific:
*[[Randomize transforms module|TECH 312: Randomize transforms assignment]]
*[[Light with ramp falloff|TECH 312: Light with ramp falloff assignment]]

!!!!Versions of Python
Pay attention to the version of Python you are using, whether from the command line or embedded in a host application.

To find the version number of a given Python interpreter, enter the following at the command line:
{{{
python -V
}}}
or enter the following, once you are in an interactive Python shell:
{{{
import sys
print sys.version
}}}

In systems and applications that I use on a regular basis, Python version numbers range from 2.5.1 to 2.7.2 (as of 27 January 2011).

There are differences between the versions. When looking at literature, be particularly careful you are not looking at information for Python 3.x unless you mean to do so. There are major changes from 2.x to 3.x. The python.org site has documentation for each version.

If something you are reading (in text or in code) seems inconsistent with what you already know of Python, it is very likely that you are looking at something written for a different version of the language. Typically, there is one obvious way of doing something in Python, a design philosophy that makes learning the language easier. That said, the creators of the language are not afraid to change things if a better way is discovered. Those changes almost always lead to greater consistency, elegance and legibility.
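A couple of quick checks you can run under any interpreter. Floor division ({{{//}}}) and explicit float division behave the same in 2.x and 3.x, which makes them safe habits while the two versions co-exist:

```python
import sys

# Which major version is this interpreter?
major = sys.version_info[0]
print(major)  # 2 or 3

# One famous 2.x-vs-3.x difference is the / operator on integers:
# in Python 2, 1/2 == 0; in Python 3, 1/2 == 0.5.
# Floor division (//) and explicit float division are version-stable:
print(7 // 2)   # 3 in both versions
print(7 / 2.0)  # 3.5 in both versions
```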

Currently, I recommend ignoring the 3.x version of Python until it is integrated into software tools that you use on a regular basis (e.g., Maya, Houdini, Nuke, etc.). That&rsquo;s what I am doing, mostly.

There is [[a good should-I-use-version-2-or-version-3 essay|http://wiki.python.org/moin/Python2orPython3]] on the //[[Python Wiki|http://wiki.python.org/moin/FrontPage]]//.

!!!!Be careful of your whitespace(s)
Python is very strict about the use of whitespace in code, especially indentation. What is optional (although highly encouraged) in many languages is mandatory and functional in Python. All of the references above cover this.

Watch out also for empty lines, or apparently empty lines, at the end of your Python module files. Sometimes, people seem to accumulate these empty lines (or lines with tabs or spaces). Whitespace dust bunnies. Sometimes, this will upset the Python (especially the Python in Maya, I have found). A symptom of this problem would be Maya complaining of a syntax error on the last line of your code. Happens with Python in Houdini as well. Some text editors have an option to strip whitespace from the end of files upon saving.
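If your editor cannot strip trailing whitespace on save, a few lines of Python can sweep up the dust bunnies for you. A minimal sketch (the function name and the commented-out file path are my own):

```python
def strip_trailing_whitespace(text):
    '''Return text with trailing spaces/tabs removed from each line
    and any blank lines removed from the end of the file.'''
    lines = [line.rstrip() for line in text.splitlines()]
    while lines and not lines[-1]:
        lines.pop()  # drop empty lines at the end of the file
    return '\n'.join(lines) + '\n'

# Example of cleaning a module file in place (path is hypothetical):
# with open('my_module.py') as f:
#     cleaned = strip_trailing_whitespace(f.read())
# with open('my_module.py', 'w') as f:
#     f.write(cleaned)
```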

!!!!Python, reveal thyself
There are two built-in Python functions, {{{help(}}}//{{{object_goes_here}}}//{{{)}}} and {{{dir(}}}//{{{object_goes_here}}}//{{{)}}}, that are great tools for exploration. Remember, everything in Python is an object&hellip;
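For example (the exact attribute listing varies between Python versions):

```python
# dir() returns a list of an object's attribute names --
# even a plain integer is an object with methods:
methods = dir(42)
print('bit_length' in methods)  # True (Python 2.7 and later)

# help() prints an object's documentation; in an interactive shell, try:
#     help(str.join)
# The underlying docstring is also available directly:
print(str.join.__doc__ is not None)  # True
```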

!!!!Python search path
If you define an environment variable, {{{$PYTHONPATH}}}, which contains a colon ({{{:}}}) separated list of directories, Python will search those directories when it is looking for modules referenced by the {{{import}}} command. Such environment variables typically are defined in a login script.

Additionally, you can use the Python {{{sys}}} module to modify the search path on a session-by-session basis. Any changes you make with this method will be lost when you end the current session of Python.
{{{
import sys
print sys.path # will print a list containing the directories in the search path
sys.path.append('/my/new/directory') # adds /my/new/directory to the search path
}}}

When you look at the directories in the search path with {{{sys.path}}}, you will notice many more than are defined in {{{$PYTHONPATH}}}. These additional directories automatically are added by Python, upon start up, and are part of the structure that allows multiple versions of Python to peacefully co-exist on the same system.
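You can see the difference directly. This sketch compares the two lists (it assumes nothing about what, if anything, {{{$PYTHONPATH}}} contains):

```python
import os
import sys

# The directories named in $PYTHONPATH (the variable may be unset):
from_env = os.environ.get('PYTHONPATH', '').split(os.pathsep)

# The directories Python adds on its own -- the standard library,
# site-packages and more -- beyond what $PYTHONPATH names:
extra = [d for d in sys.path if d and d not in from_env]
print(len(extra))
```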

If you wanted to make the same addition as above, but with {{{$PYTHONPATH}}}, you might write something like the following in a bash startup script:
{{{
export PYTHONPATH=/my/new/directory:${PYTHONPATH}
}}}

Notice the use of {{{$PYTHONPATH}}} at the end of the line. This recalls the previous value of the variable when defining the new value, thus not completely replacing the previous list of directories.

!!!!Python in Maya
If you will be using Python in Maya, I strongly recommend using ~PyMEL: http://code.google.com/p/pymel/. A version of ~PyMEL has been included with //Maya// since version 2011. For earlier versions of Maya, you will need to download and install ~PyMEL.

Once you start using ~PyMEL, you should refer first to the ~PyMEL documentation for Maya commands.

!!!!Python in Houdini
In Houdini, the integration of Python is very thorough, but parameter expressions will be more verbose than in ~HScript:

~HScript:
{{{
($TX * cos($BBY * ch("bend"))) - ($TY * sin($BBY * ch("bend")))
}}}
Python:
{{{
(lvar('TX') * cos(lvar('BBY') * radians(ch('bend')))) - (lvar('TY') * sin(lvar('BBY') * radians(ch('bend'))))
}}}

You can set the default expression language for an operator (node) with the Parameters pane. You also can set the expression language on a parameter-by-parameter basis using the RMB menu. The [[Python parameter expressions|http://localhost:48626/hom/expressions]] page in the documentation describes how to switch the expression language settings (you will need to have Houdini running on your system for this link to work). The documentation for ~HScript and ~HScript expression functions has &ldquo;replaced by&rdquo; links to the Python equivalents.

Many objects in the HOM (Houdini Object Model) still have features that have not been implemented in Python, but the documentation gives good indications of implementation status.

A [[September 2010 video masterclass from SideEffects|http://www.sidefx.com/index.php?option=com_content&task=view&id=1810&Itemid=305]] gives a very good overview of Python in //Houdini// with emphasis on //Houdini// version 11.

[[Codename: Stonehenge]] describes a custom Python SOP in Houdini to read pointcloud data generated by [[Photosynth|http://photosynth.net/]].
28 June 2012 &mdash; Hi. This note is a //bit// of a work-in-progress. Should be finalized in the next couple of days&hellip;

-----

Python has operators for bit-wise manipulations of integers. This means that we can modify integers at the bit level, the binary 1s and 0s that are the lifeblood of all things digital. This note covers some of the bit-wise techniques available in Python.
!!!!!~Ac-Cent-Tchu-Ate the Positive
For the sake of this note, and with apologies to [[Arlen and Mercer|http://en.wikipedia.org/wiki/Ac-Cent-Tchu-Ate_the_Positive]], I very purposefully am //eliminating the negative// integers and the complexities of floating-point numbers. //Eliminating// is a bit harsh. How about //ignoring//? See the references below if you want to dive a bit deeper. //Bit// deeper...get it? Okay, I&rsquo;ll stop.

This note is based on Python 2.7. Other versions may require workarounds, some of which are indicated below.

-----

Python allows for integers of arbitrary magnitude. This, in turn, means that we can just as easily work with an integer like {{{42}}} as with an integer like {{{17976931348623159077293051907890247336174137221L}}} (how obnoxious is that?). We also can think of these arbitrarily-long integers as containers of sequences of bits; {{{42}}} would be {{{101010}}} and {{{17976931348623159077293051907890247336174137221L}}} would be {{{11001001...138 bits removed for brevity...10000101}}}.

In the context of bit-wise manipulation, integers can be likened to strings. A string contains a sequence of characters and an integer can be thought to contain a sequence of bits. The analogy breaks down at this point, because we cannot directly index or slice the bits of an integer. We have to use the bitwise operators below or, if you need to do extensive bit-wise work, you might look at the [[bitarray|https://github.com/ilanschnell/bitarray]] module. The bitarray module does support more sophisticated indexing, slicing and a host of other features.
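If you only need to //look// at the bits (not modify them), one workaround is to slice the string that {{{bin()}}} returns:

```python
# We cannot index or slice the bits of an integer directly,
# but we can slice the string that bin() returns (look-only):
pattern = bin(42)[2:]      # '101010' -- [2:] strips the leading '0b'
high_three = pattern[:3]   # '101'
print(pattern, high_three)
```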

There is a notation for writing integer values in Python in binary form:
{{{
>>> 0b100101
37
}}}
You can use this notation anywhere in Python: {{{0b}}}, followed by a sequence of {{{0}}}s and {{{1}}}s. Python will interpret that value as an integer. Notice that there are //no// quotation marks involved. These are not strings. The {{{bin()}}} function, however, //does// return strings&hellip;

As of Python 2.6, there is a built-in function, {{{bin()}}}, which returns a binary representation of an integer. For example:
{{{
>>> bin(42) # notice that bin() returns a string
'0b101010'
>>> bin(257)
'0b100000001'
}}}
The {{{bin()}}} function returns a variable-length string, showing only the number of binary digits necessary to represent the number.

Generally, in Python, the term //representation// indicates a string, the contents of which could be used to reconstruct an object. The built-in {{{repr()}}} can be used to return representations of any object. Behind the scenes, the {{{repr()}}} function calls the object&rsquo;s {{{__repr__()}}} method. If the object&rsquo;s class does not implement {{{__repr__()}}}, the {{{__str__()}}} method is called.

The fact that {{{bin()}}} returns a variable-length string can make some of what follows a //bit// (I haven&rsquo;t stopped, have I?) difficult to follow. If you are comparing two different bit patterns, corresponding bits won&rsquo;t necessarily align visually. Here is a function which returns a bit pattern for a number, but with a fixed number of digits:
{{{
def bits(value, width=32, chunk=8):
    '''
    Returns a string containing the bit pattern for the provided integer.

    The width argument determines the number of 1s and 0s that will be returned.
    The chunk argument delimits a given number of bits with spaces.
    '''
    if value < 0:
        raise ValueError('value must be >= 0')

    if value >= 2**width:
        raise ValueError('value too large for width=%d; increase width to >= %d'
                         % (width, len(bin(value)) - 2))

    value += 2**width
    s = bin(value)[-width:]
    return ' '.join([s[i:i + chunk] for i in xrange(0, len(s), chunk)])
}}}
This makes it a bit easier to visually align corresponding bits between multiple patterns:
{{{
>>> bits(42) # notice that bits() also returns a string
'00000000 00000000 00000000 00101010'
>>> bits(257)
'00000000 00000000 00000001 00000001'
}}}
Both the {{{bits()}}} and {{{bin()}}} functions are visualization tools for what follows. The point is not to do string manipulation on {{{'00000000 00000000 00000001 00000001'}}}, but rather to manipulate the bits numerically using bit-wise operators.
!!!!Interpreting those bits
When we say that {{{0b101010}}} is equivalent to the number 42, what do we mean? How do we translate those {{{0}}}s and {{{1}}}s (//binary//, or //base-2//, representation) into the numbers we are accustomed to using, like 42 (//decimal//, or //base-10//, notation)? Here is a quick version.

As with decimal numbers, the rightmost digit of a binary number represents the smallest magnitude, and the magnitude increases as we move further to the left. Each binary digit represents its value (0 or 1), multiplied by a respective power of 2. For {{{23}}}, the binary equivalent is {{{0b10111}}}, and here is a breakdown of those binary digits:
{{{
1              0              1              1              1            

1 * (2**4)  +  0 * (2**3)  +  1 * (2**2)  +  1 * (2**1)  +  1 * (2**0)   

1 *   16    +  0 *   8     +  1 *   4     +  1 *   2     +  1 *   1      

      16    +        0     +        4     +        2     +        1      = 23
}}}
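Python will do this conversion for you: the built-in {{{int()}}} accepts an explicit base argument. For example:

```python
# int() with an explicit base argument performs the conversion above:
print(int('10111', 2))  # 23

# The same sum, written out in Python:
total = 1*(2**4) + 0*(2**3) + 1*(2**2) + 1*(2**1) + 1*(2**0)
print(total)            # 23
```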
/%
{{{
| 1          | 0          | 1          | 1          | 1          |
| 1 * (2**4) | 0 * (2**3) | 1 * (2**2) | 1 * (2**1) | 1 * (2**0) |
| 1 *   16   | 0 *   8    | 1 *   4    | 1 *   2    | 1 *   1    |
        16   +       0    +       4    +       2    +       1      = 23
}}}

|{{{1}}}|{{{0}}}|{{{1}}}|{{{1}}}|{{{1}}}|
|{{{1 * (2**4)}}}|{{{0 * (2**3)}}}|{{{1 * (2**2)}}}|{{{1 * (2**1)}}}|{{{1 * (2**0)}}}|
|{{{1 * 16}}}|{{{0 * 8}}}|{{{1 * 4}}}|{{{1 * 2}}}|{{{1 * 1}}}|
|{{{16}}}|{{{0}}}|{{{4}}}|{{{2}}}|{{{1}}}|

{{{
16 + 0 + 4 + 2 + 1 = 23
}}}
%/Notice that as we move from right to left, the powers (exponents) of 2 increase, starting from 0 on the far right. Something that always catches me is the fact that any number to the power of zero is 1.
{{{
>>> 2**0
1
>>> 1234567890**0
1
}}}

How many values can be represented with 5 bits? {{{2**4 + 2**3 + 2**2 + 2**1 + 2**0 + 1}}}, which is 32. Why the last {{{+ 1}}}? It accounts for the case of {{{00000}}}. Another way to think about it is that we can represent the values 0&ndash;31, which is a total of 32 unique values.

For every bit we add to the mix, we double the number of possible values that can be represented. One of my favorite examples of this idea is //[[Every Icon|http://www.numeral.com/eicon.html]]// by John F. Simon, Jr. The project explores, sequentially, all of the possible combinations of 32x32 1-bit pixels. That would be {{{2**(32*32)}}} possible icons, which ends up being a base-10 number with 309 digits.
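Both claims are easy to check in the interpreter:

```python
# Each additional bit doubles the number of representable values:
print(2**5)    # 32 values for 5 bits (0 through 31)

# Every Icon steps through 2**(32*32) possible 32x32 1-bit images:
n_icons = 2**(32 * 32)
print(len(str(n_icons)))  # the number of base-10 digits in that count
```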

As another way to approach this, try the following in the interactive Python interpreter and think about the results (this requires Python 2.7 or greater):
{{{
>>> for i in range(32):
...     print i, i.bit_length(), bin(i)
}}}

{{{int.bit_length()}}} is a method on {{{int}}} objects that returns the minimum number of bits required to store the integer. If you want to check the bit length of an ar//bit//rary integer, you can do this:
{{{
>>> (4235).bit_length()
13
>>> bin(4235)
'0b1000010001011'  # there are 13 0s and 1s after the '0b'
}}}
If you do not put the parentheses around the number, {{{(4235)...}}} in this case, Python will raise a syntax error; it tries to read {{{4235.}}} as the beginning of a floating-point number.

If you are using Python 2.6, the following would be the equivalent of {{{int.bit_length()}}}:
{{{
# this assumes integers >= 0
def bit_length(i):
    return len(bin(i)) - 2
}}}

!!!!Bit-wise operators
Python has bit-wise operators for the following:

|and|{{{&}}}|
|or (inclusive)|{{{|}}}|
|xor (exclusive or)|{{{^}}}|
|invert (not)|{{{~}}}|
|left shift|{{{<<}}}|
|right shift|{{{>>}}}|

We will start with some definitions and examples of each individual operator and then move on to a couple/few practical examples.

!!!!!Bit-wise and ({{{&}}})
The bit-wise //and// operator ({{{&}}}) takes two operands and returns an integer representing a bit pattern which contains a 1 where both corresponding bits in the operands were 1, and a 0 in all other cases.

Here is a //truth table// for the {{{&}}} operator:

|//expression//|//result//|
|{{{0 & 1}}}|{{{0}}}|
|{{{1 & 0}}}|{{{0}}}|
|{{{1 & 1}}}|{{{1}}}|
|{{{0 & 0}}}|{{{0}}}|

In this and similar tables that follow, {{{0b}}} will be left off in the expressions. I will write, {{{0 & 1}}}, instead of {{{0b0 & 0b1}}}. This means that I am representing the operands as integers, which is a valid thing to do. This works for the integers 0 and 1, but no other numbers. For example, {{{101}}} is very different from {{{0b101}}} (a difference of 96, to be precise).

{{{
>>> 0b100101, 0b110001
(37, 49)
>>> bits(0b100101) # 37
'00000000 00000000 00000000 00100101'
>>> bits(0b110001) # 49
'00000000 00000000 00000000 00110001'
>>> bits(37 & 49)
'00000000 00000000 00000000 00100001'
>>> 37 & 49
33
>>> bits(33)
'00000000 00000000 00000000 00100001'
}}}

!!!!!Bit-wise or (inclusive) ({{{|}}})
The bit-wise //or// operator ({{{|}}}) takes two operands and returns an integer representing a bit pattern which contains a 1 if either or both of the corresponding bits in the operands were 1, and a 0 if both were 0.

|//expression//|//result//|
|{{{0 | 1}}}|{{{1}}}|
|{{{1 | 0}}}|{{{1}}}|
|{{{1 | 1}}}|{{{1}}}|
|{{{0 | 0}}}|{{{0}}}|

{{{
>>> bits(0b100101) # 37
'00000000 00000000 00000000 00100101'
>>> bits(0b110001) # 49
'00000000 00000000 00000000 00110001'
>>> bits(37 | 49)
'00000000 00000000 00000000 00110101'
}}}
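A common use of {{{|}}} is combining independent single-bit flags into one integer. The flag names here are made up for illustration:

```python
# Hypothetical option flags, one bit each (names are my own):
READ    = 0b001
WRITE   = 0b010
EXECUTE = 0b100

mode = READ | WRITE        # combine flags with |
print(bin(mode))           # '0b11'
print(bool(mode & WRITE))  # True -- test a flag with &
print(bool(mode & EXECUTE))  # False
```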

!!!!!Bit-wise or (exclusive, a.k.a. xor) ({{{^}}})
The bit-wise //xor// operator (//exclusive or//) ({{{^}}}) takes two operands and returns an integer representing a bit pattern which contains a 1 if exactly one of the corresponding bits in the operands was 1. If both were 1 or both were 0, xor returns 0 for the corresponding bit. //One or the other, but not both.//

|//expression//|//result//|
|{{{0 ^ 1}}}|{{{1}}}|
|{{{1 ^ 0}}}|{{{1}}}|
|{{{1 ^ 1}}}|{{{0}}}|
|{{{0 ^ 0}}}|{{{0}}}|

{{{
>>> bits(0b100101) # 37
'00000000 00000000 00000000 00100101'
>>> bits(0b110001) # 49
'00000000 00000000 00000000 00110001'
>>> bits(37 ^ 49)
'00000000 00000000 00000000 00010100'
}}}
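A useful property of xor: applying the same mask twice restores the original value, which makes {{{^}}} the natural operator for //toggling// bits:

```python
value = 0b100101            # 37
mask  = 0b001111            # toggle the low four bits

toggled = value ^ mask
print(bin(toggled))         # '0b101010' -- 42
print(bin(toggled ^ mask))  # '0b100101' -- xor-ing again restores the original
```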


!!!!!Bit-wise inverting (logical not) ({{{~}}})
This one is easy and unary. The bit-wise //invert// operator, {{{~}}}, flips all of the bits. {{{1}}}s become {{{0}}}s and {{{0}}}s become {{{1}}}s. The equivalent of logical //not// (not //[[knot|http://www.kennethahuff.com/Works/WorkGroupThumbnails.php?g=Knots]]//).

|//expression//|//result//|
|{{{~1}}}|{{{0}}}|
|{{{~0}}}|{{{1}}}|

So, this is where my wanting to ignore negative numbers becomes an issue. I won&rsquo;t go into it here. [[This article|http://wiki.python.org/moin/BitwiseOperators]] discusses the situation in &ldquo;Preamble: ~Twos-Complement Numbers&rdquo; (top of the page). Hmmmm&hellip; I think I need to modify {{{bits()}}}&hellip;
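Until then, two things worth knowing: in Python, {{{~x}}} always equals {{{-x - 1}}} (a two's-complement identity), and combining {{{~}}} with a mask keeps the result positive and fixed-width:

```python
# ~x is -x - 1 for any integer (a two's-complement identity):
print(~42)                         # -43

# To invert within a fixed width, mask off the extra (sign) bits;
# here, a 6-bit invert of 0b101010 gives 0b010101:
inverted = ~0b101010 & 0b111111
print(bin(inverted))               # '0b10101'
```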

!!!!!Bit-wise shifting ({{{<<}}} and {{{>>}}})
The two bit-shifting operators slide bits to the left ({{{<<}}}) and to the right ({{{>>}}}). Bit positions that are &ldquo;vacated&rdquo; are filled with zeros. Some examples:
{{{
>>> bits(37)
'00000000 00000000 00000000 00100101'
>>> bits(37 << 4) # shift the bits 4 places to the left
'00000000 00000000 00000010 01010000'
>>> bits(416)
'00000000 00000000 00000001 10100000'
>>> bits(416 >> 3) # shift the bits 3 places to the right
'00000000 00000000 00000000 00110100'
}}}
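A sanity check for shifts: shifting left by //n// multiplies by {{{2**n}}}, and shifting right by //n// floor-divides by {{{2**n}}}. Combined with a mask, a right shift also lets us read a single bit (the {{{get_bit()}}} helper is my own):

```python
# A shift is arithmetic in disguise:
print(37 << 4, 37 * 2**4)     # 592 both ways
print(416 >> 3, 416 // 2**3)  # 52 both ways

# Combined with a mask, a right shift reads a single bit:
def get_bit(value, position):
    '''Return bit `position` (0 = right-most) of a non-negative integer.'''
    return (value >> position) & 1

print([get_bit(37, p) for p in range(6)])  # [1, 0, 1, 0, 0, 1] -- 37 is 0b100101
```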

-----
!!!!A &ldquo;practical&rdquo; example &mdash; Extracting RGB data from a 16-bit unsigned integer
In [[Codename: Stonehenge]], I describe a custom surface operator in Houdini that extracts pointcloud information from data generated by [[Photosynth|http://photosynth.net/]]. Data for each of the points is stored in a 14-byte packet. The first 12 bytes describe the position as three 32-bit floating-point numbers. The final 2 bytes are an unsigned, 16-bit integer. //Unsigned// means that the integer only represents non-negative numbers, so all 16 bits are dedicated to the value. But&hellip;these 16 bits contain the red, the green //and// the blue color channels for the point. The left-most 5 bits contain the information for the red channel, the middle 6 bits contain the green and the 5 right-most bits contain the blue. As an example, take the unsigned integer 21063 (extracted from a sample file):

{{{
>>> bin(21063)
'0b101001001000111'
}}}
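The standard-library {{{struct}}} module is one way to read such a packet. This sketch packs and then unpacks a synthetic example; the little-endian byte order ({{{<}}}) is an assumption on my part, not something confirmed by the Photosynth format:

```python
import struct

# A 14-byte packet: three 32-bit floats (position) followed by one
# unsigned 16-bit integer (packed color). Byte order is assumed.
packet = struct.pack('<fffH', 1.0, 2.0, 3.0, 21063)
print(len(packet))  # 14 bytes

x, y, z, color = struct.unpack('<fffH', packet)
print(color)        # 21063
```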

So how to turn 21063 ({{{0101001001000111}}}) into three separate values? And, in my case, I wanted them to be three separate floating-point numbers in the range [0.0&ndash;1.0].

We visually can split up the bits:

|Red|Green|Blue|
|{{{01010}}}|{{{010010}}}|{{{00111}}}|

But that is only an illustration; it does not give us numbers. We will tackle this one color at a time.

!!!!!Red
In order to interpret {{{01010}}} as a useful number for red, we need to shift those bits 11 positions to the right, so that they sit on the far right instead of the far left:
{{{
>>> bits(21063, width=16, chunk=16)
'0101001001000111'  # original
>>> bits(21063 >> 11, width=16, chunk=16)
'0000000000001010'  # shifted
>>> 0b0000000000001010
10  # the integer value for those bits
>>> 10 / 31.0  # 31.0 is the maximum value, as a float, that could be represented
0.3225806451612903
>>> (21063 >> 11) / 31.0  # all in one step
0.3225806451612903  # that is our normalized red value
}}}
Notice that after the shift operation, the spaces to the left of our &ldquo;target&rdquo; bits are filled with zeros.

One down, two to go.

!!!!!Green
To get the green bits shifted to the right, we can {{{21063 >> 5}}}:
{{{
>>> bits(21063, width=16, chunk=16)
'0101001001000111'
>>> bits(21063 >> 5, width=16, chunk=16)
'0000001010010010'
}}}
But we still have those 5 pesky red bits hanging around on the left.

//Bit-masking// to the rescue.
{{{
>>> bits(21063 >> 5, width=16, chunk=16)
'0000001010010010'
>>> bits(0b111111, width=16, chunk=16)
'0000000000111111'
>>> bits((21063 >> 5) & 0b111111, width=16, chunk=16)
'0000000000010010'
}}}
We can create a mask for the bits we want to keep, {{{0b111111}}} in this case, and then, using the bit-wise and {{{&}}}, we can clear out (set to {{{0}}}) any bits that are not set to {{{1}}} in the mask. Easy peasy.

!!!!!Blue
Finally, for the blue, we do not need to shift, because the 5 blue bits already are on the far right side. We only need to mask, but this time with {{{0b11111}}} (one less bit set to {{{1}}}):
{{{
>>> bits(21063 & 0b11111, width=16, chunk=16)
'0000000000000111'
}}}

We end up with the following normalized values:
|Red|Green|Blue|
|0.3225806451612903|0.2857142857142857|0.22580645161290322|
{{{
>>> ((21063 >> 5) & 0b111111) / 63.0
0.2857142857142857
>>> (21063 & 0b11111) / 31.0
0.22580645161290322
}}}
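Putting the three channels together into one function (the name {{{unpack_rgb565}}} is my own; the 5-6-5 layout is the one described above):

```python
def unpack_rgb565(packed):
    '''Unpack a 16-bit unsigned integer into normalized (r, g, b) floats.

    The top 5 bits are red, the middle 6 green, the bottom 5 blue.
    '''
    r = ((packed >> 11) & 0b11111)  / 31.0
    g = ((packed >> 5)  & 0b111111) / 63.0
    b = (packed         & 0b11111)  / 31.0
    return (r, g, b)

print(unpack_rgb565(21063))
# (0.3225806451612903, 0.2857142857142857, 0.22580645161290322)
```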

So, the cool kids probably would not write {{{0b111111}}}, because they would want to show everyone that they know hexadecimal as well:
{{{
>>> hex(0b111111)
'0x3f'
>>> ((21063 >> 5) & 0x3f) / 63.0  # green
0.2857142857142857
}}}
And the really cool kids would go octal:
{{{
>>> oct(0x3f)
'077'
>>> ((21063 >> 5) & 077) / 63.0  # green
0.2857142857142857
}}}
Personally, that one does not align well with my legibility obsession. I know Python would understand it. Being a lowly human, I might think that {{{077}}} is the base-10 seventy-seven&hellip;

The dorky kid in me would do something like:
{{{
>>> int('25', 29)
63
>>> ((21063 >> 5) & int('25', 29)) / 63.0  # green
0.2857142857142857
}}}
Base-29, anyone?
{{{
| 2           | 5           |
| 2 * (29**1) | 5 * (29**0) |
| 2 *   29    | 5 *   1     |
       58     +      5        = 63
}}}
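Python&rsquo;s {{{int()}}} accepts any base from 2 to 36 (digits 0&ndash;9, then letters), so the positional arithmetic in the table above is easy to double-check:

```python
value = int('25', 29)
# spell out the positional expansion from the table
assert value == 2 * 29**1 + 5 * 29**0
print(value)  # 63
```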

[img[Kermit at Stonehenge|inclusions-2010-fall/dmc-discovery-2010-10.jpg]]

All of that to get a point cloud of Kermit the Frog sitting on a zebra pi&ntilde;ata inside Stonehenge.

-----
!!!!Some final thoughts

Notice that if we look at the integer results of bit-wise operations, they are not necessarily intuitive:
{{{
>>> 37 & 49, 37 | 49, 37 ^ 49
(33, 53, 20)
>>> 37 << 4, 416 >> 3
(592, 52)
}}}
Best to think of the integers as containers, using something like {{{bin()}}} or {{{bits()}}} to visualize the individual bits stored in those containers.
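For example, lining the binary forms up with string formatting makes those results from above obvious at a glance:

```python
a, b = 37, 49
print('a     = {0:06b}'.format(a))      # 100101
print('b     = {0:06b}'.format(b))      # 110001
print('a & b = {0:06b}'.format(a & b))  # 100001 -> 33
print('a | b = {0:06b}'.format(a | b))  # 110101 -> 53
print('a ^ b = {0:06b}'.format(a ^ b))  # 010100 -> 20
```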

-----

!!!!Resources
*http://wiki.python.org/moin/BitManipulation &mdash; This article bounces around a bit, but there is good information there.
*http://wiki.python.org/moin/BitwiseOperators &mdash; Quick overview of bit-manipulation in Python, including a description of two&rsquo;s-complement numbers (something I skipped above).
*http://pypi.python.org/pypi/bitarray/ &mdash; If you happen to be manipulating many bits, you might find this package useful.
*http://wiki.python.org/moin/BitArrays &mdash; If you want to roll your own bit arrays, this article shows how you might do it.
*http://graphics.stanford.edu/~seander/bithacks.html &mdash; These bit hacks are written in/for C, and many are not directly relevant to Python, but they might be of interest.
*//[[CODE, The Hidden Language of Computer Hardware and Software|http://www.charlespetzold.com/code/]]//, by Charles Petzold &mdash; Answers the question: What do flashlights, the British invasion, black cats, and seesaws have to do with computers? (from the back cover). A lovely book that dives down into the 1s and 0s.
*//[[Every Icon|http://www.numeral.com/eicon.html]]// by John F. Simon, Jr.

Useful [[built-in functions|http://docs.python.org/library/functions.html]] &mdash; {{{bin()}}}, {{{chr()}}}, {{{hex()}}}, {{{int()}}}, {{{oct()}}} and {{{ord()}}}

Some related modules from the Python Standard Library:
*[[array|http://docs.python.org/library/array.html]] and at [[PyMOTW|http://www.doughellmann.com/PyMOTW/array/]]
*[[binascii|http://docs.python.org/library/binascii.html]]
*[[struct|http://docs.python.org/library/struct.html]] and at [[PyMOTW|http://www.doughellmann.com/PyMOTW/struct/]]
~PyMOTW = //[[Python Module of the Week|http://www.doughellmann.com/PyMOTW/]]// by Doug Hellmann.

-----
!!!!~TODOs
*Wikipedia articles for logical operators
*references for floating point in binary

-----

TODO

!!!!A practical example &mdash; Flags for os.open()
{{{
# requires the bits() function from above
def flags_in_os_module():
    import os
    from pprint import pprint
    results = []
    for item in dir(os):
        if item.startswith('O_'):
            int_value = getattr(os, item)  # getattr avoids eval()
            results.append((bits(int_value), int_value, 'os.%s' % item))
    results.sort()
    pprint(results)
}}}
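Most of those {{{O_}}} constants occupy a distinct bit, which is what makes them combinable. A sketch (the exact integer values vary by platform, but the bits never overlap):

```python
import os

# merge several flags into one integer with the bit-wise or
flags = os.O_WRONLY | os.O_CREAT | os.O_APPEND

# ...and test for an individual flag with the bit-wise and
print(bool(flags & os.O_CREAT))  # True -- that bit is set
print(bool(flags & os.O_TRUNC))  # False -- that bit is not
```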
Some interesting/useful modules and packages that I have come across in my explorations. (See also the //[[Python Package Index (PyPI)|http://pypi.python.org/pypi]]// which contains a list of thousands of available packages.)

//See also// [[Open source projects of interest]]
!!!!Standard library modules
[TODO: //module list forthcoming...//]
!!!!Third-party packages and modules
*//[[HEALPix|http://healpix.jpl.nasa.gov/]]// and //[[healpy|http://code.google.com/p/healpy/]]// (Python bindings for ~HEALPix) &mdash; ''H''ierarchical ''E''qual ''A''rea iso''L''atitude ''Pix''elization of a sphere. We can&rsquo;t let the astronomers have all of the fun with this one.
*//[[iPython|http://ipython.scipy.org/moin/]]// &mdash; an enhanced Python interpreter and a framework for parallel computing.
*//[[Kivy|http://kivy.org/]]// &mdash; A library for rapid development of natural user interface (e.g., touch) applications.
*[[Phidgets|http://www.phidgets.com/]] &mdash; USB hardware to interface with the physical world (sensors, motors, ~LEDs, etc.).
*//[[PyBrain|http://pybrain.org/]]// &mdash; A [[machine learning|http://en.wikipedia.org/wiki/Machine_learning]] library for Python.
*//[[PyCUDA|http://mathema.tician.de/software/pycuda]]// &mdash; A library which gives access to the [[CUDA|http://www.nvidia.com/object/cuda_home_new.html]] parallel processing development platform.
*//[[PyEphem|http://rhodesmill.org/pyephem/]]// &mdash; Scientific-grade astronomical computations.
*//[[RPyC|http://rpyc.wikidot.com/]]// &mdash; Library for remote procedure calls, clustering and distributed computing; used as the basis for remote access/control in //Houdini// (versions 11+).
*//[[SciPy|http://www.scipy.org/]]// and //[[NumPy|http://numpy.scipy.org/]]// &mdash; For mathematics, science, and engineering. //~NumPy// is utilized by a number of other packages for its efficiency in dealing with large arrays of data.
[TODO: This note is under development.]

The term //remote procedure call// describes a general type of communication between two distinct applications (i.e., applications running as separate processes, possibly two different applications, possibly two instances of the same application). This note will focus on remote procedure call (RPC) mechanisms available through Python for //Maya// and //Houdini//.

This information is current as of //Maya 2011// and //Houdini 11//. The version of Python that you use does not necessarily need to match the version in the host application (//Maya// or //Houdini//). Experiment to see what works.

An important aspect of each of the mechanisms described here is that the host application and the script making the remote procedure calls do not need to be running on the same computer or under the same operating system.

!!!!RPC for //Houdini//
To quote a //Houdini// [[documentation page|http://localhost:48626/hom/rpc]]: &ldquo;You can control Houdini remotely by accessing HOM through an RPC (remote procedure call) interface over the network. Houdini provides a built-in RPC module, so you can control a remote copy of Houdini from a Python script.&rdquo;

That same [[documentation page|http://localhost:48626/hom/rpc]] has a simple example, but here is one that is slightly more elaborate:

In a Python Shell in //Houdini//, enter the following:
{{{
import hrpyc
hrpyc.start_server()
}}}
You should see something similar to this:
{{{
>>> import hrpyc
>>> hrpyc.start_server()
<Thread(Thread-2, started daemon 4962390016)>
>>> 
}}}

At this point, an {{{hrpyc}}} server is running as a separate thread inside the //Houdini// process and is available to receive messages.

For the remote half of the communication, a small bit of setup needs to take place. The remote process will need access to the {{{hrpyc}}} module, along with {{{rpyc}}}. An easy way to accomplish this is to add {{{$HFS/python/lib/python2.6/site-packages}}} and {{{$HFS/houdini/python2.6libs}}} to the {{{PYTHONPATH}}} environment variable in your shell start-up script (e.g., {{{.bashrc}}}, {{{.profile}}}, or {{{bash_custom}}}). If you would like to do this entirely with your Python script, do the following:
{{{
import os, sys
sys.path.append(os.environ['HFS'] + '/python/lib/python2.6/site-packages')
sys.path.append(os.environ['HFS'] + '/houdini/python2.6libs')
}}}

If your remote process will be running on a computer on which //Houdini// is not installed, you will need to copy the {{{hrpyc.py}}} file and the {{{rpyc}}} directory from the locations above.

There are equivalent paths and versions for Python 2.5 as well (substitute {{{2.5}}} for {{{2.6}}} in all of the above code). Both the {{{hrpyc}}} module and the {{{rpyc}}} package are written purely in Python, so, technically, you also should be able to use Python 2.7 for your remote process.

The downside to this wholesale addition of the Python modules that ship with //Houdini// is that some of those modules //may// conflict with other modules in your {{{PYTHONPATH}}}, especially if you are relying upon a different version of a given module. Thorough testing is appropriate.

Now, fire up a Python interpreter...

[TODO: More to come...]

The {{{hrpyc}}} module is based on an open-source project, //~RPyC// ([[link|http://rpyc.wikidot.com/]]), an elegant system written in pure Python. The package is fully documented on its site.

!!!!RPC for //Maya//
With //Maya//, remote control and interaction takes the form of a communication channel over which MEL or Python commands can be sent and response messages can be received.

!!!!~RPyC for //Maya//
It seems technically possible that //~RPyC// could be used with //Maya// to create an interface similar to {{{hrpyc}}} in //Houdini//. There are issues with threading in //Maya// that may cause difficulty.

[TODO: Experiment with //~RPyC//, //~PyMEL// and //Maya//.]
!!!!Open a URL in a web browser from the Python Standard Library
The Python Standard Library includes a [[webbrowser|http://docs.python.org/library/webbrowser]] module.
{{{
import webbrowser
webbrowser.open_new_tab('http://www.kennethahuff.com')
}}}

!!!!Printing text in various colors in a terminal
Let&rsquo;s say you want to have your Python script print its text in color...[[Here is a page that lists the ANSI escape codes|http://ascii-table.com/ansi-escape-sequences.php]] that, among other things, can change the foreground and background colors of terminals.

For example, {{{print('\x1b[1m\x1b[31mHello\x1b[0m')}}} will print the word &ldquo;Hello&rdquo; in exciting red. That {{{\x1b[0m}}} at the end is important&nbsp;&mdash;&nbsp;it resets the terminal to the default colors. The {{{\x1b}}} represents the ASCII Escape character (ASCII code 27).
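As a sketch, here is a tiny helper (the {{{colorize()}}} name is my own invention) that wraps text in those sequences:

```python
def colorize(text, color_code):
    # \x1b[1m turns on bold; \x1b[<n>m sets the foreground color
    # (31 = red, 32 = green, 34 = blue, ...);
    # \x1b[0m resets the terminal to its defaults.
    return '\x1b[1m\x1b[{0}m{1}\x1b[0m'.format(color_code, text)

print(colorize('Hello', 31))  # bold red 'Hello' in a terminal
```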

Part of me does not believe that I am writing this...I fully expect to be cursed with [[Technicolor|http://en.wikipedia.org/wiki/Technicolor]] //hello.py// scripts from now on...
Technical notes and resources
Kenneth A. Huff
http://www.kennethahuff.com/teaching/index.html
//[[Brain kibble|http://www.kennethahuff.com/blog/category/brain-kibble/]]//
//[[Looking and Seeing]]//

[[Bubble notes]]
[[Maya dual-monitor fix]]
[[Of interest at PyCon 2011]] &mdash; Some notes on things that interested me at [[PyCon 2011|http://us.pycon.org/2011/]].
!!!!Collections of computer graphics papers and other resources
*~Ke-Sen Huang and [[Tim Rowley|http://trowley.org/]] have created [[indices of computer graphics papers published since approximately 2000|http://kesen.realtimerendering.com/]]. Papers are listed first by conference/publication. Diving down a level, there are links to authors&rsquo; sites, where often you can find a free version of the paper for download. Some of the indices have been removed, but these still are great jumping-off points.
*[[Pixar Online Library|http://graphics.pixar.com/library/]]
*Craig Reynolds (of //[[Boids|http://www.red3d.com/cwr/boids/]]// fame) has compiled [[an extensive list of papers and resources for non-photorealistic rendering (NPR).|http://www.red3d.com/cwr/npr/]] Many of the links are dead, but a quick //Google// search should turn up a given resource.
*[[Paul Bourke|http://paulbourke.net/]] over the years has built up a site that is a vast resource for computer graphics related topics. Whether you want to find [[the intersection of a line and a circle (or sphere)|http://local.wasp.uwa.edu.au/~pbourke/geometry/sphereline/]] or the [[Cissoid of Diocles|http://local.wasp.uwa.edu.au/~pbourke/geometry/cissoiddiocles/]], there are all sorts of goodies, often with source code, on the site.
*Malcolm Kesson&rsquo;s [[fundza.com|http://www.fundza.com/]] is another fantastic resource for code examples, with an emphasis on ~RenderMan.
*[[Famous Curves Index|http://www-history.mcs.st-and.ac.uk/Curves/Curves.html]] &mdash; The stories and formulas for some well-known curves.

If you are interested in music and/or sound visualization, you might enjoy the [[Create Digital Motion|http://createdigitalmotion.com/]] and [[Create Digital Music|http://createdigitalmusic.com/]] blogs. Lots of good stuff.

!!!!Everything else
*[[bash_custom]] &mdash; Information on customizing the Linux environment at SCAD
*[[Houdini: Links, notes and resources|Houdini: Links]] &mdash; A grabbag of links, resources and notes related to ~SideEffects Software&rsquo;s //Houdini//
*[[Look development: Links, notes and resources|Look development: Notes]] &mdash; Lighting, shading and rendering
*[[Linux: Links, notes and resources|Linux: Notes]] &mdash; //Linux// and other //*nix// (including Mac OS X) operating systems (command line stuff)
*[[Maya: Links, notes and resources|Maya: Links]] &mdash; Links to resources for Autodesk //Maya//
*[[Photography notes|Photography: Notes]]
*[[Proceduralism: Links, notes and resources|Proceduralism: Notes]] &mdash; A developing cornucopia of references for procedural techniques.
*[[Python notes]] &mdash; Some suggestions for learning and using Python, especially in the context of computer graphics.
*[[Open source projects of interest]]
*[[Stereoscopic imaging: Links, notes and resources|Stereoscopic: Links]]
*[[Text editors]] &mdash; Some general information regarding text editors for scripting
**[[jEdit: Set-up at SCAD]] &mdash; Instructions for setting up jEdit under Linux at SCAD (contains some general suggestions as well)
*[[Video screen capture]]
*[[Visual resources]] &mdash; A collection of links to sites with deep and/or broad visual references. 
''{{kManicule{&#9758;}}} Prepare a resized JPEG version of the anaglyph.''

Working from your cropped master image, you will be resizing the image to fit within a width of 1,920 pixels and a height of 1,080 pixels (a horizontal image).

Use the Image menu > Image Size&hellip; command to resize the image.

Before changing the numbers in the dialog box, confirm that the //Constrain Proportions// check box is turned on.

Now change the //Pixel Dimensions// values for //Width// and //Height// so that the //Width// is less than or equal to 1920 //and// the //Height// is less than or equal to 1080.

If the proportions of your image happen to be exactly 16:9, your image size should work out to be exactly 1920x1080. If this is the case, you can skip the next step.

[img[Image Size|inclusions-stereo/090_ImageSize.jpg]]

If your image size did not end up being exactly 1920x1080, there is one more step.

Use the Image menu > Canvas Size&hellip; command to change the overall size of the image. This command changes the number of pixels without scaling.

[img[Canvas Size|inclusions-stereo/091_CanvasSize.png]]

In my case, I would change the width from 1627 to 1920.

Finally, turn on the &ldquo;Black background&rdquo; layer in the Layers palette.

You are now ready to save the image.

Select File menu > Save As&hellip; Change the file format to JPEG. This automatically will save a copy of the image.

I would like everyone to use the following naming convention: {{{LastnameFirstname_000.jpg}}}, where you would substitute your name and a sequential number for {{{000}}}.

For example, my first anaglyph would be called

{{{HuffKen_001.jpg}}} 

Once you click on the Save button, you will be presented with a second dialog box. Please use these settings:

[img[JPEG Settings|inclusions-stereo/092_JPEGSettings.png]]

''{{kManicule{&#9758;}}} Place a copy of your image(s) in a shared Dropbox.com directory.''

You should have a [[dropbox.com|http://www.dropbox.com]] account.

On your computer, in the directory that Dropbox is synchronizing, create a new directory with this naming convention:

{{{Stereo_LastnameFirstname}}}

So my directory would be {{{Stereo_HuffKen}}}

You should share this directory with me. [[Here are instructions on how to share a directory.|https://www.dropbox.com/help/19]] When sharing, please use {{{ken@kennethahuff.com}}}

You should only need to set this up once.

Now you can copy your images into the shared folder and I should be able to grab them.

''Deadline''

Please have any images that you would like reviewed in place in this folder by 8 p.m. on the Sunday evening before class.
[[Here are some additional resources.|Stereoscopic: Links]]

!!!!Tutorials
[[Here are some notes on how I make anaglyphic stereoscopic images|How I make an anaglyphic stereoscopic image]].

[[Here are notes on saving the images for review and sharing the images with me for class via Dropbox.com.|Stereoscopic photography class: Image submission guidelines]]

!!!!Naming convention and dropbox.com summary
Via Dropbox, share a directory with {{{ken@kennethahuff.com}}} with the following naming convention: {{{Stereo_LastnameFirstname}}}

Images should be ~JPEGs, 1920x1080 pixels (a horizontal image), with the following naming convention: {{{LastnameFirstname_000.jpg}}}, where {{{000}}} is a sequential number.

For more details, please see the tutorials referenced above.

!!!!Downloads
[[You can download a small set of sample stereoscopic photographs here.|http://dl.dropbox.com/u/7754637/SampleStereoscopicPairsJPEGs.zip]] These are meant for practice.

[[You can download my Photoshop anaglyph templates here.|http://dl.dropbox.com/u/7754637/anaglyph_templates.zip]] The .zip file contains two .psd files. [[Instructions for using the templates.|How I make an anaglyphic stereoscopic image]]
!!!!Stereoscopic glasses
If you need to purchase stereoscopic glasses (just about any type and configuration), [[Rainbow Symphony|http://www.rainbowsymphony.com/]] is your source. They also will send you a free pair of glasses if you send them a self-addressed, stamped envelope.
!!!!Miscellaneous
*[[www.stereoscopic.org|http://www.stereoscopic.org/]] &mdash; Stereoscopic Displays and Applications conference site. The [[Virtual Library|http://www.stereoscopic.org/library/]] has downloadable versions of a number of key books on stereoscopic imaging.
*[[www.stereoscopy.com|http://www.stereoscopy.com/]] &mdash; A wide-ranging site for stereoscopic imaging.
*Sony prepared this [[timeline of stereoscopic cinema.|http://www.ny3d.org/2011/06/sony_creates_timeline_chart_ev.html]]
*Lenny Lipton&rsquo;s [[blog|http://lennylipton.wordpress.com/]]. Also, the site where you can download a PDF of his //[[Foundations of the Stereoscopic Cinema|http://www.stereoscopic2.org/library/foundation.cfm]]//.
!!!!Ken&rsquo;s notes
*[[How I make an anaglyphic stereoscopic image]]
*[[Overview notes for a stereoscopic photography workshop|Stereoscopic photography class: Overview]]
/*{{{*/
body {font-family: Georgia, "Times New Roman", Times, serif; font-size: 9.5pt; line-height: 14pt;}
a {border-bottom: 1px dotted #2e2e2e; text-decoration: none;}
.kManicule {font-size:160%; position:relative; top:0.2em;}
.kWarning {color: #880000; font-weight:bold; font-size: 9.5pt; padding-top: 6pt; padding-bottom: 12pt;}
hr { border-top-width: 1px; border-right-width: 1px; border-bottom-width: 1px; border-left-width: 1px;
	border-top-style: solid; border-right-style: none; border-bottom-style: none; border-left-style: none;
	border-top-color: #333333; border-right-color: #333333; border-bottom-color: #333333; border-left-color: #333333;
	width: 100%;
}
.viewer table, table.twtable {border-collapse:collapse; margin:0.0em 0.0em;}
.viewer pre {
	font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace;
	color: #000000;
	background-color: #ddd;
	font-size: 9.25pt;
	border: 1px dashed #999999;
	line-height: 10pt;
	/*padding: 4px;*/
	overflow: auto;
	width: 90%;
	white-space:pre;
}
.viewer code {
	font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace;
	color: #000000;
	background-color: #ddd;
	font-size: 9.25pt;
	white-space: nowrap;
}
.externalLink {text-decoration: none;}
.subtitle {font-size: 9.5pt; color: #555; padding-top: 6pt; padding-bottom: 12pt;}
.siteTitle {font-size: 16pt;}
.siteSubtitle {font-size: 9.5pt;}
h1,h2,h3,h4,h5,h6 {font-weight:bold; text-decoration:none; margin-top:1.5em; margin-bottom:0.75em; border:none;}
h1 {font-size:1.35em;}
h2 {font-size:1.25em;}
h3 {font-size:1.1em;}
h4 {font-size:1em;}
h5 {font-size:1em;}
.headerShadow {position:relative; padding:2em 0 1em 1em; left:0px; top:0px;}
.headerForeground {position:absolute; padding:2em 0 1em 1em; left:0px; top:0px;}
#mainMenu {position:absolute; left:0; width:10em; text-align:left; padding:2.75em 0.5em 0.5em 1em;}
.tiddler {padding:1em 1em 2em; margin-bottom: 0.5em; border-bottom: 1px dotted #666;}
/*}}}*/
/*{{{*/
body {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}

a {color:[[ColorPalette::PrimaryMid]];}
a:hover {background-color:[[ColorPalette::PrimaryMid]]; color:[[ColorPalette::Background]];}
a img {border:0;}

.kManicule {color:[[ColorPalette::SecondaryDark]]; background:transparent;}

h1,h2,h3,h4,h5,h6 {color:[[ColorPalette::SecondaryDark]]; background:transparent;}
h1 {border-bottom:2px solid [[ColorPalette::TertiaryLight]];}
h2,h3 {border-bottom:1px solid [[ColorPalette::TertiaryLight]];}

.button {color:[[ColorPalette::PrimaryDark]]; border:1px solid [[ColorPalette::Background]];}
.button:hover {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::SecondaryLight]]; border-color:[[ColorPalette::SecondaryMid]];}
.button:active {color:[[ColorPalette::Background]]; background:[[ColorPalette::SecondaryMid]]; border:1px solid [[ColorPalette::SecondaryDark]];}

.header {background:[[ColorPalette::PrimaryMid]];}
.headerShadow {color:[[ColorPalette::Foreground]];}
.headerShadow a {font-weight:normal; color:[[ColorPalette::Foreground]];}
.headerForeground {color:[[ColorPalette::Background]];}
.headerForeground a {font-weight:normal; color:[[ColorPalette::PrimaryPale]];}

.tabSelected{color:[[ColorPalette::PrimaryDark]];
	background:[[ColorPalette::TertiaryPale]];
	border-left:1px solid [[ColorPalette::TertiaryLight]];
	border-top:1px solid [[ColorPalette::TertiaryLight]];
	border-right:1px solid [[ColorPalette::TertiaryLight]];
}
.tabUnselected {color:[[ColorPalette::Background]]; background:[[ColorPalette::TertiaryMid]];}
.tabContents {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::TertiaryPale]]; border:1px solid [[ColorPalette::TertiaryLight]];}
.tabContents .button {border:0;}

#sidebar {}
#sidebarOptions input {border:1px solid [[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel {background:[[ColorPalette::PrimaryPale]];}
#sidebarOptions .sliderPanel a {border:none;color:[[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel a:hover {color:[[ColorPalette::Background]]; background:[[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel a:active {color:[[ColorPalette::PrimaryMid]]; background:[[ColorPalette::Background]];}

.wizard {background:[[ColorPalette::PrimaryPale]]; border:1px solid [[ColorPalette::PrimaryMid]];}
.wizard h1 {color:[[ColorPalette::PrimaryDark]]; border:none;}
.wizard h2 {color:[[ColorPalette::Foreground]]; border:none;}
.wizardStep {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];
	border:1px solid [[ColorPalette::PrimaryMid]];}
.wizardStep.wizardStepDone {background:[[ColorPalette::TertiaryLight]];}
.wizardFooter {background:[[ColorPalette::PrimaryPale]];}
.wizardFooter .status {background:[[ColorPalette::PrimaryDark]]; color:[[ColorPalette::Background]];}
.wizard .button {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::SecondaryLight]]; border: 1px solid;
	border-color:[[ColorPalette::SecondaryPale]] [[ColorPalette::SecondaryDark]] [[ColorPalette::SecondaryDark]] [[ColorPalette::SecondaryPale]];}
.wizard .button:hover {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::Background]];}
.wizard .button:active {color:[[ColorPalette::Background]]; background:[[ColorPalette::Foreground]]; border: 1px solid;
	border-color:[[ColorPalette::PrimaryDark]] [[ColorPalette::PrimaryPale]] [[ColorPalette::PrimaryPale]] [[ColorPalette::PrimaryDark]];}

.wizard .notChanged {background:transparent;}
.wizard .changedLocally {background:#80ff80;}
.wizard .changedServer {background:#8080ff;}
.wizard .changedBoth {background:#ff8080;}
.wizard .notFound {background:#ffff80;}
.wizard .putToServer {background:#ff80ff;}
.wizard .gotFromServer {background:#80ffff;}

#messageArea {border:1px solid [[ColorPalette::SecondaryMid]]; background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]];}
#messageArea .button {color:[[ColorPalette::PrimaryMid]]; background:[[ColorPalette::SecondaryPale]]; border:none;}

.popupTiddler {background:[[ColorPalette::TertiaryPale]]; border:2px solid [[ColorPalette::TertiaryMid]];}

.popup {background:[[ColorPalette::TertiaryPale]]; color:[[ColorPalette::TertiaryDark]]; border-left:1px solid [[ColorPalette::TertiaryMid]]; border-top:1px solid [[ColorPalette::TertiaryMid]]; border-right:2px solid [[ColorPalette::TertiaryDark]]; border-bottom:2px solid [[ColorPalette::TertiaryDark]];}
.popup hr {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::PrimaryDark]]; border-bottom:1px;}
.popup li.disabled {color:[[ColorPalette::TertiaryMid]];}
.popup li a, .popup li a:visited {color:[[ColorPalette::Foreground]]; border: none;}
.popup li a:hover {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; border: none;}
.popup li a:active {background:[[ColorPalette::SecondaryPale]]; color:[[ColorPalette::Foreground]]; border: none;}
.popupHighlight {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}
.listBreak div {border-bottom:1px solid [[ColorPalette::TertiaryDark]];}

.tiddler .defaultCommand {font-weight:bold;}

.shadow .title {color:[[ColorPalette::TertiaryDark]];}

.title {color:[[ColorPalette::SecondaryDark]];}
.subtitle {color:[[ColorPalette::TertiaryDark]];}

.toolbar {color:[[ColorPalette::PrimaryMid]];}
.toolbar a {color:[[ColorPalette::TertiaryLight]];}
.selected .toolbar a {color:[[ColorPalette::TertiaryMid]];}
.selected .toolbar a:hover {color:[[ColorPalette::Foreground]];}

.tagging, .tagged {border:1px solid [[ColorPalette::TertiaryPale]]; background-color:[[ColorPalette::TertiaryPale]];}
.selected .tagging, .selected .tagged {background-color:[[ColorPalette::TertiaryLight]]; border:1px solid [[ColorPalette::TertiaryMid]];}
.tagging .listTitle, .tagged .listTitle {color:[[ColorPalette::PrimaryDark]];}
.tagging .button, .tagged .button {border:none;}

.footer {color:[[ColorPalette::TertiaryLight]];}
.selected .footer {color:[[ColorPalette::TertiaryMid]];}

.sparkline {background:[[ColorPalette::PrimaryPale]]; border:0;}
.sparktick {background:[[ColorPalette::PrimaryDark]];}

.error, .errorButton {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::Error]];}
.warning {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::SecondaryPale]];}
.lowlight {background:[[ColorPalette::TertiaryLight]];}

.zoomer {background:none; color:[[ColorPalette::TertiaryMid]]; border:3px solid [[ColorPalette::TertiaryMid]];}

.imageLink, #displayArea .imageLink {background:transparent;}

.annotation {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; border:2px solid [[ColorPalette::SecondaryMid]];}

.viewer .listTitle {list-style-type:none; margin-left:-2em;}
.viewer .button {border:1px solid [[ColorPalette::SecondaryMid]];}
.viewer blockquote {border-left:3px solid [[ColorPalette::TertiaryDark]];}

.viewer table, table.twtable {border:2px solid [[ColorPalette::TertiaryDark]];}
.viewer th, .viewer thead td, .twtable th, .twtable thead td {background:[[ColorPalette::SecondaryMid]]; border:1px solid [[ColorPalette::TertiaryDark]]; color:[[ColorPalette::Background]];}
.viewer td, .viewer tr, .twtable td, .twtable tr {border:1px solid [[ColorPalette::TertiaryDark]];}

.viewer pre {border:1px solid [[ColorPalette::SecondaryLight]]; background:[[ColorPalette::SecondaryPale]];}
.viewer code {color:[[ColorPalette::SecondaryDark]];}
.viewer hr {border:0; border-top:dashed 1px [[ColorPalette::TertiaryDark]]; color:[[ColorPalette::TertiaryDark]];}

.highlight, .marked {background:[[ColorPalette::SecondaryLight]];}

.editor input {border:1px solid [[ColorPalette::PrimaryMid]];}
.editor textarea {border:1px solid [[ColorPalette::PrimaryMid]]; width:100%;}
.editorFooter {color:[[ColorPalette::TertiaryMid]];}
.readOnly {background:[[ColorPalette::TertiaryPale]];}

#backstageArea {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::TertiaryMid]];}
#backstageArea a {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::Background]]; border:none;}
#backstageArea a:hover {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; }
#backstageArea a.backstageSelTab {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}
#backstageButton a {background:none; color:[[ColorPalette::Background]]; border:none;}
#backstageButton a:hover {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::Background]]; border:none;}
#backstagePanel {background:[[ColorPalette::Background]]; border-color: [[ColorPalette::Background]] [[ColorPalette::TertiaryDark]] [[ColorPalette::TertiaryDark]] [[ColorPalette::TertiaryDark]];}
.backstagePanelFooter .button {border:none; color:[[ColorPalette::Background]];}
.backstagePanelFooter .button:hover {color:[[ColorPalette::Foreground]];}
#backstageCloak {background:[[ColorPalette::Foreground]]; opacity:0.6; filter:'alpha(opacity=60)';}
/*}}}*/
/*{{{*/
* html .tiddler {height:1%;}

body {font-size:.75em; font-family:arial,helvetica; margin:0; padding:0;}

h1,h2,h3,h4,h5,h6 {font-weight:bold; text-decoration:none;}
h1,h2,h3 {padding-bottom:1px; margin-top:1.2em;margin-bottom:0.3em;}
h4,h5,h6 {margin-top:1em;}
h1 {font-size:1.35em;}
h2 {font-size:1.25em;}
h3 {font-size:1.1em;}
h4 {font-size:1em;}
h5 {font-size:.9em;}

hr {height:1px;}

a {text-decoration:none;}

dt {font-weight:bold;}

ol {list-style-type:decimal;}
ol ol {list-style-type:lower-alpha;}
ol ol ol {list-style-type:lower-roman;}
ol ol ol ol {list-style-type:decimal;}
ol ol ol ol ol {list-style-type:lower-alpha;}
ol ol ol ol ol ol {list-style-type:lower-roman;}
ol ol ol ol ol ol ol {list-style-type:decimal;}

.txtOptionInput {width:11em;}

#contentWrapper .chkOptionInput {border:0;}

.externalLink {text-decoration:underline;}

.indent {margin-left:3em;}
.outdent {margin-left:3em; text-indent:-3em;}
code.escaped {white-space:nowrap;}

.tiddlyLinkExisting {font-weight:bold;}
.tiddlyLinkNonExisting {font-style:italic;}

/* the 'a' is required for IE, otherwise it renders the whole tiddler in bold */
a.tiddlyLinkNonExisting.shadow {font-weight:bold;}

#mainMenu .tiddlyLinkExisting,
	#mainMenu .tiddlyLinkNonExisting,
	#sidebarTabs .tiddlyLinkNonExisting {font-weight:normal; font-style:normal;}
#sidebarTabs .tiddlyLinkExisting {font-weight:bold; font-style:normal;}

.header {position:relative;}
.header a:hover {background:transparent;}
.headerShadow {position:relative; padding:4.5em 0 1em 1em; left:-1px; top:-1px;}
.headerForeground {position:absolute; padding:4.5em 0 1em 1em; left:0px; top:0px;}

.siteTitle {font-size:3em;}
.siteSubtitle {font-size:1.2em;}

#mainMenu {position:absolute; left:0; width:10em; text-align:right; line-height:1.6em; padding:1.5em 0.5em 0.5em 0.5em; font-size:1.1em;}

#sidebar {position:absolute; right:3px; width:16em; font-size:.9em;}
#sidebarOptions {padding-top:0.3em;}
#sidebarOptions a {margin:0 0.2em; padding:0.2em 0.3em; display:block;}
#sidebarOptions input {margin:0.4em 0.5em;}
#sidebarOptions .sliderPanel {margin-left:1em; padding:0.5em; font-size:.85em;}
#sidebarOptions .sliderPanel a {font-weight:bold; display:inline; padding:0;}
#sidebarOptions .sliderPanel input {margin:0 0 0.3em 0;}
#sidebarTabs .tabContents {width:15em; overflow:hidden;}

.wizard {padding:0.1em 1em 0 2em;}
.wizard h1 {font-size:2em; font-weight:bold; background:none; padding:0; margin:0.4em 0 0.2em;}
.wizard h2 {font-size:1.2em; font-weight:bold; background:none; padding:0; margin:0.4em 0 0.2em;}
.wizardStep {padding:1em 1em 1em 1em;}
.wizard .button {margin:0.5em 0 0; font-size:1.2em;}
.wizardFooter {padding:0.8em 0.4em 0.8em 0;}
.wizardFooter .status {padding:0 0.4em; margin-left:1em;}
.wizard .button {padding:0.1em 0.2em;}

#messageArea {position:fixed; top:2em; right:0; margin:0.5em; padding:0.5em; z-index:2000; _position:absolute;}
.messageToolbar {display:block; text-align:right; padding:0.2em;}
#messageArea a {text-decoration:underline;}

.tiddlerPopupButton {padding:0.2em;}
.popupTiddler {position: absolute; z-index:300; padding:1em; margin:0;}

.popup {position:absolute; z-index:300; font-size:.9em; padding:0; list-style:none; margin:0;}
.popup .popupMessage {padding:0.4em;}
.popup hr {display:block; height:1px; width:auto; padding:0; margin:0.2em 0;}
.popup li.disabled {padding:0.4em;}
.popup li a {display:block; padding:0.4em; font-weight:normal; cursor:pointer;}
.listBreak {font-size:1px; line-height:1px;}
.listBreak div {margin:2px 0;}

.tabset {padding:1em 0 0 0.5em;}
.tab {margin:0 0 0 0.25em; padding:2px;}
.tabContents {padding:0.5em;}
.tabContents ul, .tabContents ol {margin:0; padding:0;}
.txtMainTab .tabContents li {list-style:none;}
.tabContents li.listLink { margin-left:.75em;}

#contentWrapper {display:block;}
#splashScreen {display:none;}

#displayArea {margin:1em 17em 0 14em;}

.toolbar {text-align:right; font-size:.9em;}

.tiddler {padding:1em 1em 0;}

.missing .viewer,.missing .title {font-style:italic;}

.title {font-size:1.6em; font-weight:bold;}

.missing .subtitle {display:none;}
.subtitle {font-size:1.1em;}

.tiddler .button {padding:0.2em 0.4em;}

.tagging {margin:0.5em 0.5em 0.5em 0; float:left; display:none;}
.isTag .tagging {display:block;}
.tagged {margin:0.5em; float:right;}
.tagging, .tagged {font-size:0.9em; padding:0.25em;}
.tagging ul, .tagged ul {list-style:none; margin:0.25em; padding:0;}
.tagClear {clear:both;}

.footer {font-size:.9em;}
.footer li {display:inline;}

.annotation {padding:0.5em; margin:0.5em;}

* html .viewer pre {width:99%; padding:0 0 1em 0;}
.viewer {line-height:1.4em; padding-top:0.5em;}
.viewer .button {margin:0 0.25em; padding:0 0.25em;}
.viewer blockquote {line-height:1.5em; padding-left:0.8em;margin-left:2.5em;}
.viewer ul, .viewer ol {margin-left:0.5em; padding-left:1.5em;}

.viewer table, table.twtable {border-collapse:collapse; margin:1.0em 0.8em;}
.viewer th, .viewer td, .viewer tr,.viewer caption,.twtable th, .twtable td, .twtable tr,.twtable caption {padding:3px;}
table.listView {font-size:0.85em; margin:0.8em 1.0em;}
table.listView th, table.listView td, table.listView tr {padding:0px 3px 0px 3px;}

.viewer pre {padding:0.5em; margin-left:0.5em; font-size:1.2em; line-height:1.4em; overflow:auto;}
.viewer code {font-size:1.2em; line-height:1.4em;}

.editor {font-size:1.1em;}
.editor input, .editor textarea {display:block; width:100%; font:inherit;}
.editorFooter {padding:0.25em 0; font-size:.9em;}
.editorFooter .button {padding-top:0px; padding-bottom:0px;}

.fieldsetFix {border:0; padding:0; margin:1px 0px;}

.sparkline {line-height:1em;}
.sparktick {outline:0;}

.zoomer {font-size:1.1em; position:absolute; overflow:hidden;}
.zoomer div {padding:1em;}

* html #backstage {width:99%;}
* html #backstageArea {width:99%;}
#backstageArea {display:none; position:relative; overflow: hidden; z-index:150; padding:0.3em 0.5em;}
#backstageToolbar {position:relative;}
#backstageArea a {font-weight:bold; margin-left:0.5em; padding:0.3em 0.5em;}
#backstageButton {display:none; position:absolute; z-index:175; top:0; right:0;}
#backstageButton a {padding:0.1em 0.4em; margin:0.1em;}
#backstage {position:relative; width:100%; z-index:50;}
#backstagePanel {display:none; z-index:100; position:absolute; width:90%; margin-left:3em; padding:1em;}
.backstagePanelFooter {padding-top:0.2em; float:right;}
.backstagePanelFooter a {padding:0.2em 0.4em;}
#backstageCloak {display:none; z-index:20; position:absolute; width:100%; height:100px;}

.whenBackstage {display:none;}
.backstageVisible .whenBackstage {display:block;}
/*}}}*/
''TECH 311: Digital Materials and Textures''

Jump to notes for class [[1|TECH 311: Class 1]], [[2|TECH 311: Class 2]], [[3|TECH 311: Class 3]], [[4|TECH 311: Class 4]], [[5|TECH 311: Class 5]], [[6|TECH 311: Class 6]], [[7|TECH 311: Class 7]], [[8|TECH 311: Class 8]], [[9|TECH 311: Class 9]], [[10|TECH 311: Class 10]], [[11|TECH 311: Class 11]], [[12|TECH 311: Class 12]], [[13|TECH 311: Class 13]], [[14|TECH 311: Class 14]], [[15|TECH 311: Class 15]], [[16|TECH 311: Class 16]], [[17|TECH 311: Class 17]], [[18|TECH 311: Class 18]], [[19|TECH 311: Class 19]], [[20|TECH 311: Class 20]]; [[Open all in new tab|index.html#%5B%5BTECH%20311%5D%5D%20%5B%5BTECH%20311%3A%20Class%201%5D%5D%20%5B%5BTECH%20311%3A%20Class%202%5D%5D%20%5B%5BTECH%20311%3A%20Class%203%5D%5D%20%5B%5BTECH%20311%3A%20Class%204%5D%5D%20%5B%5BTECH%20311%3A%20Class%205%5D%5D%20%5B%5BTECH%20311%3A%20Class%206%5D%5D%20%5B%5BTECH%20311%3A%20Class%207%5D%5D%20%5B%5BTECH%20311%3A%20Class%208%5D%5D%20%5B%5BTECH%20311%3A%20Class%209%5D%5D%20%5B%5BTECH%20311%3A%20Class%2010%5D%5D%20%5B%5BTECH%20311%3A%20Class%2011%5D%5D%20%5B%5BTECH%20311%3A%20Class%2012%5D%5D%20%5B%5BTECH%20311%3A%20Class%2013%5D%5D%20%5B%5BTECH%20311%3A%20Class%2014%5D%5D%20%5B%5BTECH%20311%3A%20Class%2015%5D%5D%20%5B%5BTECH%20311%3A%20Class%2016%5D%5D%20%5B%5BTECH%20311%3A%20Class%2017%5D%5D%20%5B%5BTECH%20311%3A%20Class%2018%5D%5D%20%5B%5BTECH%20311%3A%20Class%2019%5D%5D%20%5B%5BTECH%20311%3A%20Class%2020%5D%5D]]
!!!!Assignments
*[[Head shot]]
*[[Projects 1, 2 and 3|TECH 311: Projects]]
*[[Photographic scavenger hunt exercise|TECH 311: Scavenger hunt assignment]]
*[[Scratches exercise|TECH 311: Scratches assignment]]
!!!!Potential assignments
*[[Photography exercise|TECH 311: Photography assignment]]
*[[Magnolia leaf exercise|TECH 311: Magnolia leaf assignment]]

[[TECH 311: Preparing for the class]]

[[Maya resources|Maya: Links]]
[[Look development resources|Look development: Notes]]
[[Visual resources]]
Before class 2, you should have a [[head shot|Head shot]] in place in the drop box.

The specifications for [[Project 1 can be found here|TECH 311: Projects]].

Before class 2, you should have decided on at least one possibility for the subject of your first project.
''Reminder for next class:'' We will be meeting off-site. See [[Class 11 notes|TECH 311: Class 11]] for more information.

Information has been posted regarding an upcoming [[Extra help session]].

''Recommended reading:'' For those working on projects (now or in the future) that will require the use of high-resolution textures or many file-based textures, I recommend that you read //Maya Documentation -> mental ray -> mental ray for Maya reference -> Functionality -> Texture Mapping//.

!!!!Project 2
By next class, you should have finished the following for [[project 2|TECH 311: Projects]]:
*Selection of model
*Gathering reference
*Creating concept art
You also should have started the texturing process.
!!!!Photography session
''Important:'' For this class period we will be meeting in Monterey Square, on Bull Street, between Gordon and Taylor Streets. [[Here is the location on Google Maps.|http://maps.google.com/?ie=UTF8&ll=32.071388,-81.094835&spn=0.002932,0.005681&z=18]] We will meet at 8 a.m. in the center of the square.

''Bring your camera.'' Be certain that your camera is fully charged and that you have sufficient space available on the camera for taking approximately an hour&rsquo;s worth of photographs.

If it is raining at 7:00 a.m. that morning, or if rain is forecast for the class period, we will meet at Montgomery Hall in our regular classroom. In this case, you should bring your camera to class.

''Assignment:'' Here are the instructions for the [[photography exercise|TECH 311: Photography assignment]]. Your photos should be in the drop box by 6:00 p.m., Sunday, 13 February.

We will speak briefly about [[pin hole cameras|http://en.wikipedia.org/wiki/Pinhole_camera]] and [[camera obscura|http://en.wikipedia.org/wiki/Camera_obscura]]. Here are some [[recent photographs by Abelardo Morell|http://minimalexposition.blogspot.com/2010/11/abelardo-morell-universe-next-door.html]] which were created with those techniques.
''Reminder:'' If I requested revisions on your first project, they are due in the drop box by 1:30 p.m., Thursday (tomorrow), 5 May. After that, the current grade will stand.

I told you &mdash; [[it is hard to hold a camera still.|http://www.engadget.com/2011/05/03/lasers-prove-you-cant-hold-a-camera-still-video/]]

All of the UV mapping tools and commands are documented in the //Maya Documentation > User Guide > Modeling > Mapping ~UVs.//

A sample texture placement grid (and the //Illustrator// file used to create it) has been added to _MATERIAL.

//Unfolding the Earth: Myriahedral Projections// by Jarke J. van Wijk is available [[here.|http://www.win.tue.nl/~vanwijk/myriahedral/]]
Today&rsquo;s class is a review and work session for [[Project 2|TECH 311: Projects]]. Remember that 11% of your grade depends on having review materials in place in the drop box by the start of class.

[[Maya: Toggling the update of render thumbnails]] (a note and MEL script). I also placed a copy of the script in _MATERIAL.
Project 2 is due at the start of class today.
There will be an [[extra help session|Extra help sessions]] this Saturday.

All photographs for the [[photographic scavenger hunt exercise|Photographic scavenger hunt exercise]] are due in the drop box by next class.
Today&rsquo;s class is a review and work session for [[Project 3|TECH 311: Projects]]. Remember that 11% of your grade depends on having review materials in place in the drop box by the start of class.

All photographs for the [[photographic scavenger hunt exercise|Photographic scavenger hunt exercise]] are due in the drop box by today&rsquo;s class.
''Assignment:'' [[Photographic scavenger hunt exercise|TECH 311: Scavenger hunt assignment]]
!!!!Project 1
By class 3, you should:
*Finalize your [[project 1|TECH 311: Projects]] subject.
*Have a model ready.
*Have concept artwork in place and a preliminary reference .html file (the final version of the reference is due with the final project deadline).

You should be exploring the procedural texture nodes (all of the 2D and 3D texture nodes, with the exceptions of the //File//, //Movie//, and //PSD File// nodes).

If you would like your //Maya// preferences to follow you from workstation to workstation, under Windows //and// Linux, you should follow the instructions for setting up a [[bash_custom file|bash_custom]]. A copy of the file has been added to _MATERIAL.
Project 3 is due at the end of class today.
Here is the [[gigapixel eagle feather|http://www.gigamacro.com/gigapixel_macro_photography_gallery_eagle_feather.php]] reference I showed during our discussions last class.

Here is [[a link|http://mentalraytips.blogspot.com/2008/11/joy-of-little-ambience.html]] to the article on Zap Andersson&rsquo;s blog regarding ambience.

In connection with the //Layered Texture// node, there are notes on ''blend modes'' included on the //[[Look development resources|Look development: Links]]// page.

''Assignment:'' [[Scratches exercise|TECH 311: Scratches assignment]] &mdash; Should be completed by class 5.

''Assignment update:'' The theme has been added for the ninth week of the [[photographic scavenger hunt assignment|TECH 311: Scavenger hunt assignment]].
The following has been added to _MATERIAL:
*{{{Scratches_001.ma}}} &mdash; The demonstration file from last class; uses Stucco and Marble textures to create a broken scratch pattern, ultimately used for displacement.
The [[scratches exercise|TECH 311: Scratches assignment]] should be completed by today&rsquo;s class. After today, if you make any changes to your submission, either on your own or at my request, you must send me an e-mail letting me know that I should reevaluate the exercise.

Any example files that I post will be //Maya 2011// Maya ASCII files. If you are using a previous version, you can tell //Maya// to ignore version differences when opening the file. File menu -> Open Scene... (option box) -> General Options -> Ignore Version checkbox.
!!!!Example files
The following examples have been added to _MATERIAL:
*{{{NPR_SurfaceLuminaceNode_003.ma}}} &mdash; The painterly hippopotami. It&rsquo;s just fun to type &ldquo;hippopotami&rdquo;.
*{{{PerpendicularProjectedRamps_Example.ma}}} (with a corresponding rendered image) &mdash; This file shows two Ramp textures, projected perpendicular to each other, as an example of using projected ramps as a masking/weight mapping mechanism in a shading network.
*{{{rampForLightFalloff_Sample.ma}}} &mdash; based on [[a workflow created by Joseph Francis.|http://www.digitalartform.com/archives/2005/08/hue_falloff_in.html]] The light rig incorporates a ramp control of the light color which varies based on angle, giving control over the color with the light falloff. See also [[this post|http://www.digitalartform.com/archives/2006/05/sophisticated_c.html]] and the Ramp Shader.
*{{{StipplingPattern_001.ma}}} &mdash; Uses material nodes as textures to drive a stippled pattern (causing the pattern to be affected by surface shading and lighting).
*{{{ColorCorrection_Example.ma}}} &mdash; This is the scene I built in class using the Remap HSV node. There is a second material which contains the beginnings of a more involved network that potentially gives much greater control over hue, saturation and value (incorporating ~RGB-to-HSV and ~HSV-to-RGB nodes).
!!!!Examples
The following examples have been added to _MATERIAL:
*{{{Masking_SnowTexture_Example_001.ma}}} (with a corresponding rendered image) &mdash; Shows two simple examples of using the Snow texture as a mask based on direction of surface orientation.
*{{{Masking_Locator_Example_001.ma}}} (with a corresponding rendered image) &mdash; A shading network that varies color based on the position of a Locator relative to the surface.

Next class is a review and work session for [[Project 1|TECH 311: Projects]]. Remember that 11% of your grade depends on having review materials in place in the drop box by the start of class.
Today&rsquo;s class is a review and work session for [[Project 1|TECH 311: Projects]]. Remember that 11% of your project grade depends on having review materials in place in the drop box by the start of class.
[[Project 1|TECH 311: Projects]] is due at the beginning of this class period.

''Assignment:'' [[Project 2|TECH 311: Projects]]. You should have your subject selected by class 10.

Information has been posted regarding an upcoming [[Extra help session|Extra help sessions]].
For this exercise, you will create a simplified model of a magnolia leaf and then texture the leaf using only procedural nodes (i.e., no file-based textures).

During class you will be given a magnolia leaf. If you did not attend class, you should find your own magnolia leaf. There is no need to model any holes in the leaf. You can deform the leaf (to create twists, for example), but it is suggested that you start with a relatively flat model, create a texture reference object and then deform the leaf (sculpting, lattice deformation or non-linear deformers work well for this).

You should work to have the exercise completed and submitted by class 6.

You will submit the exercise in a directory in the drop box titled, {{{LastnameFirstname_MagnoliaLeaf}}}. That directory should contain the following:
*{{{LastnameFirstname_MagnoliaLeaf.ma}}} &mdash; A //Maya// ASCII file of your completed scene.
*{{{LastnameFirstname_MagnoliaLeaf.[tif|png]}}} &mdash; A TIFF or PNG file containing a rendering of the leaf from a straight-on view. The long dimension of the rendering should be 1,000 pixels. The exact proportions are at your discretion. If submitting a TIFF, be sure to check the file in //Photoshop// to ensure that it is flattened to the background layer and that it contains only R, G and B channels (no alpha channels).

You will be turning in your original leaves during class 6.

Additional things you can do for this exercise include:
*Integrating the leaf with a photograph
*Creating multiple leaves and variant materials
*Texturing both sides of the leaf
*Creating additional, interesting renderings
After we meet in the park and you have spent some time taking photographs, select ''5 photos'' and place them in a directory, {{{LastnameFirstname_Ex02/}}}, in your drop box directory.

Each file should be renamed according to this convention: {{{LastnameFirstname_Ex02_#.jpeg}}}, where {{{#}}} is a number, 1&ndash;5. Example: {{{HuffKen_Ex02_1.jpeg}}}, {{{HuffKen_Ex02_2.jpeg}}}, etc.
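If you would like to script the renaming, here is a hypothetical sketch (not part of the assignment; the function name, the student name and the file list are all placeholders) that applies the convention above using only the Python standard library:

```python
# Hypothetical helper: rename a selected list of photos to the
# LastnameFirstname_Ex02_#.jpeg convention, numbered from 1.
from pathlib import Path

def rename_photos(directory, student, photos):
    """Rename each photo in `photos` to `student`_Ex02_#.jpeg inside `directory`."""
    renamed = []
    for i, photo in enumerate(photos, start=1):
        target = Path(directory) / f"{student}_Ex02_{i}.jpeg"
        Path(photo).rename(target)        # moves/renames the file on disk
        renamed.append(target.name)
    return renamed
```

You would call this with your own five selected files, e.g. {{{rename_photos("dropbox", "HuffKen", selected)}}}; double check the results before submitting.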

If your camera was saving TIFF or RAW files, please convert the files to high-quality ~JPEGs. Do not resize the photographs.

''Deadline:'' You should have the photos in the drop box by 8:00 a.m., Tuesday, 3 May.

!!!!Grading criteria
*30% &mdash; based on whether these files are in place by the deadline (timeliness)
*10% &mdash; naming conventions followed (following instructions)
*60% &mdash; photographs in place, regardless of above deadline (participation)
This class is focused on the materials and textures portion of look development.

In preparation for the class, you should read over [[the specification for the projects|TECH 311: Projects]] and think of subjects for each of your projects.

Notice that you do not need to create models for each project. Models can be acquired from outside sources or from other projects on which you are currently working or have completed in the past. For any of the projects, you should spend no more than a day on modeling.

Clever folks have thought of ways to work on a single subject (e.g., a character) for the entire quarter.

If you would like to do some studying before the class, //[[Advanced Maya Texturing and Lighting, 2nd Edition|http://www.amazon.com/dp/0470292733/]]// by Lee Lanier is an excellent resource.
!!!Project 1 &mdash; Procedural texturing
The objective of this project is to become familiar with the Hypershade, shading networks and procedural textures in Maya.
!!!Project 2 &mdash; File-based texturing
With this project you will become familiar with file-based textures and related techniques.
!!!Project 3 &mdash; Advanced techniques
For the third project, you will be working with one or more of the following: subsurface scattering, fur and/or //mia_material_x//. Non-photorealistic rendering techniques also are appropriate for this project.

!!!Schedule
Project 1 will be due in the drop box at the beginning of Class 9.
Project 2 will be due in the drop box at the beginning of Class 15.
Project 3 will be due in the drop box at the end of Class 20.

!!!Project descriptions
''Project 1 (procedural texturing):'' You are to create a piece that uses procedural textures that don&rsquo;t look as though they are &ldquo;procedural&rdquo;. You may use only 3D and 2D procedurals. No file-based textures are to be used. The materials should have a photorealistic quality. You will create a camera that moves slowly through the scene as well as 3 stills from a variety of camera distances and positions. Materials can be metal, plastic, fabric, natural or human-made, etc. Using a variety of materials will enhance the quality of this project.

''Project 2 (file-based texturing):'' The focus of this project is the creation and use of file-based textures. You will carry forward your procedural skills and should continue to incorporate those procedural techniques in this project, but the dominant force behind the look of your objects should be the use of file-based textures. These textures can be created through digital painting (2D and/or 3D), photography, scanning and/or texture baking (or a combination of two or more of these techniques).

''Project 3 (advanced techniques):'' Again carrying forward techniques from the previous two projects, this project will focus on the use of subsurface scattering materials, fur and/or other advanced materials.

!!!Reference
You will do research to find physical materials and visual elements which you will use as reference. Work from photographic or physical references.

You will submit 10&ndash;20 reference images with your project. These can be photographs that you take; images found on the Internet; scans from books and magazines; and/or scans of sketches or concept artwork.

''Concept artwork:'' For each of the projects, at least one of your reference images will be concept artwork. The concept art can be a scanned sketch, a digital painting, a digital composite &mdash; something that demonstrates thought and planning went into the project before you started working in 3D.

The reference images should be resized to be no more than 1000 by 1000 pixels and saved as high-quality ~JPEGs. You will create an HTML file ({{{LastnameFirstname_P#_Reference.html}}}) that will list the images, along with brief captions explaining the relevance of the image and a citation. The HTML file should reference the images inside a folder ({{{LastnameFirstname_P#_Reference/}}}) inside your project folder. If the image was found on the Internet and a URL is available for the source, you should include a direct link to the source. The specific names of the image files are not critical as long as the images are properly sourced in the HTML {{{<img src="">}}} tags.

{{kManicule{&#9758;}}} [[Here is an example reference page.|inclusions-2010-fall/TECH311-SampleReferencePage.html]]

For your reference page, you should confirm that your //img// tag //src// attributes in the HTML code for the images look something like this:
{{{
<img src="LastnameFirstname_P1_Reference/some_image.jpg" width="1000" height="750" />
}}}
and not like this:
{{{
<img src="http://www.somesite.com/some_path/LastnameFirstname_P1_Reference/some_image.jpg" width="1000" height="750" />
}}}
All image references should be relative to the HTML file, not absolute paths to a particular server.

Double check your reference page on another computer, preferably another platform. If any of the images do not show up on the HTML page, you have incorrect image references.
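If you want to automate that check, here is a hypothetical sketch (not part of the project specification; the class and function names are my own) that uses only the Python standard library to flag absolute //img// references:

```python
# Hypothetical helper: scan an HTML reference page and collect any <img>
# src attributes that are absolute (http://, https://, file:// or /-rooted)
# rather than relative to the HTML file.
from html.parser import HTMLParser

class ImgSrcChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.absolute = []   # srcs that will break when the page moves
        self.relative = []   # srcs relative to the HTML file (correct)

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        src = dict(attrs).get("src", "")
        if src.startswith(("http://", "https://", "file://", "/")):
            self.absolute.append(src)
        else:
            self.relative.append(src)

def check_reference_page(html_text):
    """Return the list of absolute img srcs; an empty list means the page is clean."""
    checker = ImgSrcChecker()
    checker.feed(html_text)
    return checker.absolute

bad = check_reference_page('<img src="http://www.somesite.com/some_path/some_image.jpg" />')
print(bad)  # → ['http://www.somesite.com/some_path/some_image.jpg']
```

Reading your page with {{{open("LastnameFirstname_P1_Reference.html").read()}}} and passing it to {{{check_reference_page}}} will list every reference that needs fixing.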

If you would like to include moving images as reference, please provide a URL to an online version. If the footage is not available online and you need to submit a movie file, please contact me. I want to see it, we just need to work out how...

!!!Rendering criteria
''Animation:'' You must render a series of frames (90&ndash;180 frames) where the camera moves ''slowly'' through your environment. //No crazy-fast camera moves!// Frames should be rendered at a resolution of 1280x720. The delivered animation should be no larger than 100 MB. Use a frame rate of 24 or 30 frames per second.

Any submitted movies should be ~QuickTime .mov files. Movies should be compressed, but not at the expense of quality. No interlacing. No uncompressed movie files (e.g., ''Apple&rsquo;s Animation codec is not acceptable''). H.264 compression is preferred. ''Your movie must be playable under Windows, Mac OS X and Linux. Be sure to test.''

There is no need for motion blur (if your objects are moving fast enough that motion blur would be necessary, they are moving too fast for our purposes in this class). Shadows are not required, but are very strongly encouraged.

Include a 1-second title slate at the beginning of the movie with your name, &ldquo;TECH 311: Digital Materials&rdquo;, &ldquo;Quarter Year&rdquo;, &ldquo;Project Subject&rdquo;.

''Stills:'' Three rendered still images of your scene at various distances from the camera, rendered at a higher resolution of at least 1920x1080 for print. These should be unique renderings, not single frames from the animation. The images should be artistically composed and should show off the work you have done on the project. Save each file as an RGB TIFF with LZW compression, a PNG or a Targa. ''No alpha channels.'' Check the pixel aspect ratios in Photoshop. Recent versions of Maya seem to be generating malformed images with odd pixel aspect ratios. Lossless compression is encouraged.

!!!File naming convention and directory structure
Use the following general naming convention:
{{{YourDropBox/LastnameFirstname_P#_Subject/LastnameFirstname_P#.mov}}}
{{{YourDropBox/LastnameFirstname_P#_Subject/LastnameFirstname_P#_Still1.ext}}}
{{{YourDropBox/LastnameFirstname_P#_Subject/LastnameFirstname_P#_Still2.ext}}}
{{{YourDropBox/LastnameFirstname_P#_Subject/LastnameFirstname_P#_Still3.ext}}}
{{{YourDropBox/LastnameFirstname_P#_Subject/LastnameFirstname_P#_Credits.txt}}}
{{{YourDropBox/LastnameFirstname_P#_Subject/LastnameFirstname_P#_Reference.html}}}
{{{YourDropBox/LastnameFirstname_P#_Subject/LastnameFirstname_P#_Reference/[10-20 images as JPEGs]}}}

In the title of the project directory, {{{Subject}}} refers to a one- or two-word //specific// description of the subject matter of your project. Examples: {{{HuffKen_P1_Lily}}}, {{{DoeJane_P1_Lizard}}}

Deviation from these naming conventions may result in a grading penalty.
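As a hypothetical aid (not part of the specification; the pattern and function name are placeholders), the deliverable names above can be checked with a small Python regular expression before you submit:

```python
# Hypothetical sketch: a regular expression accepting deliverable names
# that follow the convention above, e.g. "HuffKen_P1.mov" or
# "DoeJane_P2_Still3.tif". Assumes simple capitalized names; adjust as needed.
import re

DELIVERABLE = re.compile(
    r"^[A-Z][a-z]+[A-Z][a-z]+"           # LastnameFirstname
    r"_P[1-3]"                           # project number
    r"(?:\.mov"                          # the animation
    r"|_Still[1-3]\.(?:tiff?|png|tga)"   # the three stills
    r"|_Credits\.txt"                    # the credits file
    r"|_Reference\.html"                 # the reference page
    r")$"
)

def follows_convention(filename):
    return bool(DELIVERABLE.match(filename))

print(follows_convention("HuffKen_P1.mov"))         # → True
print(follows_convention("HuffKen_P1_Still4.tif"))  # → False (only Still1-3)
```

Running every file in your project directory through {{{follows_convention}}} is a quick way to catch capitalization and numbering mistakes before the deadline.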

!!!Credits
{{{LastnameFirstname_P#_Credits.txt}}} is a plain text file in which you should document any outside sources for models, textures or any other elements of your project other than reference, which is documented elsewhere. You also should indicate if you are reworking a project done for a previous class or if this is a project you are sharing with another class. If you created everything from scratch for this project, simply state that fact.

!!!Material/Texture Documentation
There are different documentation criteria for file-based techniques than for procedural techniques. A given project may combine documentation for file-based and procedural techniques.
!!!//Procedural techniques//
For project 1, in addition to the items detailed above, you also are to submit screen captures of your shading networks as shown in Maya&rsquo;s Hypershade. This also applies to any subsequent shading networks that are heavily procedural.

For each shading network: Graph the input connections to the root material node in the network. Maximize the Hypershade window to fill the screen. Turn off the create bar and show only the work area tab (the work area should fill the Hypershade window). Use the //Rearrange Graph// button to clean up the graph if it is a mess. Make a screen capture. Crop the screen capture in Photoshop so that only the shading network is showing (e.g., the window title bar does not need to be visible). Save each file as an RGB TIFF with LZW compression, a PNG or a Targa. No alpha channels.

Please use the following naming convention for the files:
{{{YourDropBox/LastnameFirstname_P#_Subject/LastnameFirstname_P#_Graph_DescriptiveNameOfTheMaterialGoesHere.tif}}}
Examples: {{{khuff/HuffKen_P1_Lily/HuffKen_P1_Graph_Stem.tiff}}}, {{{khuff/HuffKen_P1_Lily/HuffKen_P1_Graph_Flower.tiff}}}
If you have very complex networks, do not be concerned if you cannot read the individual node names. You will be asked to clarify complex networks, if necessary.

There is a sample file in the _MATERIAL directory of the drop box and this will be discussed in class.
!!!//Documentation for File-based techniques//
UV snapshots and texture paintings should be submitted as flattened ~JPEGs, no larger than 2048 by 2048 pixels. You may work and render with larger files, but provide reduced versions for your submission. You should not be working with ~JPEGs as your production format; use them only for documentation purposes.

Use the following naming convention for the files:
*{{{YourDropBox/LastnameFirstname_P#_Subject/LastnameFirstname_P#_DescriptiveNameOfTheMaterialGoesHere_Attribute.jpeg}}}

Examples:
*{{{khuff/HuffKenneth_P1_StillLife/HuffKenneth_P1_TableTop_UV.jpeg}}}
*{{{khuff/HuffKenneth_P1_StillLife/HuffKenneth_P1_TableTop_Bump.jpeg}}}
*{{{khuff/HuffKenneth_P1_StillLife/HuffKenneth_P1_TableTop_Spec.jpeg}}}
*{{{khuff/HuffKenneth_P1_StillLife/HuffKenneth_P1_TableTop_DiffColor.jpeg}}}
You may abbreviate attribute names.

If you have more than three materials with file-based textures in your project, you may document the three most significant or complex.

!!!Review sessions and review submissions
In the class period prior to the deadline for a project, we will have a review session. //At the start of class// you are expected to have a still image and an animation, fully rendered and ready to present to me and/or the class.

The review materials should be in a directory, {{{YourDropBox/LastnameFirstname_P#_Subject_Review/}}}, and should follow the naming conventions above.

''This is mandatory.'' If you do not have these items in place by the start of class, 11% will be deducted from your project grade.

If you are not in class that day, you still are required to have the materials submitted by the start of the class period. I will be very hard-nosed and [[draconian|http://en.wikipedia.org/wiki/Draco_(lawgiver)]] about this policy. Leniency will be granted only for the most extraordinary of circumstances.

The still image and movie are the only items that should be in the review directory. Reference, story and credits information should be in the standard project directory. Do not remove the review directory from your drop box after the review session. It should remain in place for the entire quarter as part of your permanent documentation for the class.

I expect to see changes and improvements, based on feedback and additional work time, when I do the final review of your project for grading. Unless I specifically tell you otherwise, you should continue working on your project until the final submission. Very occasionally, someone will do something phenomenal and I will tell them to wrap up their project based on the review session. But you have to make a very good impression on me for that to happen, so do not count on it.

The review session for Project 1 will be Class 8.
The review session for Project 2 will be Class 14.
The review session for Project 3 will be Class 19.

!!!General tips for all of the projects
*Geometry should be clean (no inter-penetrations or faceting).
*Textures should not stretch (unless appropriate) and should be an appropriate scale for the objects on which they are used.
*Memory consumption and render times should fall within an acceptable range. This means your scenes render easily, but still look good. Optimize whenever possible.
*Sloppy work is not acceptable.
*Follow the naming conventions above.
*Check all of your submitted images for alpha channels &mdash; when you find them, remove them.
*Follow the naming conventions above. (Repeated on purpose.)

!!!!Maya scene files and/or project directories
You are required to submit a scene file or project directory //only// if I request it of you individually. Typically, I do this in order to be able to use your project as an example in the future. (For exercises, scene files may be required, but you should refer to the individual assignment specifications.)
As much as you need to develop technical skills for look development, you need to develop your eye for the physical traits of objects that you observe for reference.

For this quarter-long assignment, you will be photographically documenting various material traits and physical phenomena based on a weekly theme. For each theme, you will submit a minimum of ''five'' photographs.
!!!!Themes
|Week|Theme|
|1|Visual impressions of things that no longer exist; e.g., leaf imprint in a sidewalk; Visual palimpsests|
|2|Surfaces and materials affected by moisture; Wet versus dry|
|3|Cracked or broken edges|
|4|Glass: old, new, clean, dirty, colored, broken, etc.|
|5|Translucency and semi-transparency|
|6|Changes in color over time; e.g., sun-bleaching, rusting, tarnishing, etc.|
|7|Organic decay|
|8|Layers; e.g., layered paint, decals, etc.|
|9|Fresnel-like effects|

The assignment ends in week 9, with the final submission deadline for the entire exercise at the start of week 10.

Each file should be renamed according to this convention: {{{LastnameFirstname_Week#_#.jpeg}}}, where {{{#}}} is replaced by an appropriate number. Example: {{{HuffKen_Week2_1.jpeg}}}, {{{HuffKen_Week2_2.jpeg}}}, etc.

These photographs should be placed at the top level of your SFDM drop box for the class, in a directory titled, {{{LastnameFirstname_Scavenger/}}}.

If you are shooting TIFF or RAW files, convert the files to high-quality ~JPEGs. Do not resize the photographs.

''Deadlines:'' You should have the photos for a given theme in the drop box by the start of the first class the week following the theme. You may work ahead if you like.
!!!!Grading criteria
*25% &mdash; based on whether these files are in place by the start of the first class the following week (timeliness)
*10% &mdash; naming conventions followed (following instructions)
*50% &mdash; photographs in place, regardless of above deadline (participation)
*15%+ &mdash; submission of additional photographs (no more than 10 per theme); photographs that are interesting, intriguing, well-executed, informative, illustrative, beautiful, etc. (my call).

This assignment will count as two exercises.

[[Here is the image that was shown in class and that served as the seed of inspiration for this assignment.|http://www.flickr.com/photos/phospho/5332453505/]]
For this exercise, you will create at least four unique shading networks based on the &ldquo;scratches&rdquo; technique demonstrated in class. You should take advantage of utility nodes and may use any procedural texture nodes available (you are not limited to marble and stucco).

You should focus on scratches, dents, dings, etc. &mdash; surface relief, either in the form of bump or displacement. For this exercise, surface relief is being used as a method of visualizing the patterns produced by the texture networks created. You may also elaborate on your example by addressing other material attributes with texture networks.

You should work to have the exercise completed and submitted by class 5.

You will submit the exercise in a directory in the drop box titled, {{{LastnameFirstname_Scratches/}}}. That directory should contain the following:
*{{{LastnameFirstname_Scratches.ma}}} &mdash; A //Maya// ASCII file of your completed scene.
*{{{LastnameFirstname_Scratches.[tif|png]}}} &mdash; a TIFF or PNG file containing a rendering of multiple copies of the same object, each copy with one of your materials applied, from a straight-on view. The long dimension of the rendering should be at least 1,000 pixels. The exact proportions are at your discretion. When preparing the submitted image, be sure to check the file in //Photoshop// to ensure that it is flattened to the background layer and that it contains only R, G and B channels (no alpha channels).

Suggestions for additional features and elements for this assignment:
*Create more than four shading networks
*Address additional material attributes other than surface relief in one or more materials (e.g., color, transparency, etc.)
''TECH 312: Advanced Application Scripting''

Jump to notes for class [[1|TECH 312: Class 1]], [[2|TECH 312: Class 2]], [[3|TECH 312: Class 3]], [[4|TECH 312: Class 4]], [[5|TECH 312: Class 5]], [[6|TECH 312: Class 6]], [[7|TECH 312: Class 7]], [[8|TECH 312: Class 8]], [[9|TECH 312: Class 9]], [[10|TECH 312: Class 10]], [[11|TECH 312: Class 11]], [[12|TECH 312: Class 12]], [[13|TECH 312: Class 13]], [[14|TECH 312: Class 14]], [[15|TECH 312: Class 15]], [[16|TECH 312: Class 16]], [[17|TECH 312: Class 17]], [[18|TECH 312: Class 18]], [[19|TECH 312: Class 19]], [[20|TECH 312: Class 20]]; [[Open all in new tab|index.html#%5B%5BTECH%20312%5D%5D%20%5B%5BTECH%20312%3A%20Class%201%5D%5D%20%5B%5BTECH%20312%3A%20Class%202%5D%5D%20%5B%5BTECH%20312%3A%20Class%203%5D%5D%20%5B%5BTECH%20312%3A%20Class%204%5D%5D%20%5B%5BTECH%20312%3A%20Class%205%5D%5D%20%5B%5BTECH%20312%3A%20Class%206%5D%5D%20%5B%5BTECH%20312%3A%20Class%207%5D%5D%20%5B%5BTECH%20312%3A%20Class%208%5D%5D%20%5B%5BTECH%20312%3A%20Class%209%5D%5D%20%5B%5BTECH%20312%3A%20Class%2010%5D%5D%20%5B%5BTECH%20312%3A%20Class%2011%5D%5D%20%5B%5BTECH%20312%3A%20Class%2012%5D%5D%20%5B%5BTECH%20312%3A%20Class%2013%5D%5D%20%5B%5BTECH%20312%3A%20Class%2014%5D%5D%20%5B%5BTECH%20312%3A%20Class%2015%5D%5D%20%5B%5BTECH%20312%3A%20Class%2016%5D%5D%20%5B%5BTECH%20312%3A%20Class%2017%5D%5D%20%5B%5BTECH%20312%3A%20Class%2018%5D%5D%20%5B%5BTECH%20312%3A%20Class%2019%5D%5D%20%5B%5BTECH%20312%3A%20Class%2020%5D%5D]]

!!!!Assignments
*[[Head shot]]
*[[Final project|TECH 312: Final project]]
*[[Sentence generator exercise|TECH 312: Sentence generator assignment]]
*[[Fit function exercise|TECH 312: Fit function assignment]]
*[[Randomize transforms module exercise|TECH 312: Randomize transforms assignment]]
*[[Light with ramp falloff exercise|TECH 312: Light with ramp falloff assignment]]
*[[Data parsing|TECH 312: Data parsing assignment]]
!!!!Potential assignments
*[[Ishtime exercise|TECH 312: Ishtime assignment]] &mdash; for Spring 2011, this assignment has been replaced with the [[sentence generator exercise|TECH 312: Sentence generator assignment]].
!!!!Deprecated assignments
*[[Dynamic camera rig exercise|TECH 312: Dynamic camera assignment]]

[[TECH 312: Preparing for the class]]

[[Python notes]]
[[Maya resources|Maya: Links]]
[[Houdini resources|Houdini: Links]]

{{kManicule{&#9758;}}} O&rsquo;Reilly Safari Books Online is now available through the SCAD Library. ([[link|http://0-proquest.safaribooksonline.com.library.scad.edu/]]) Many books on digital media and programming are available, including //[[Learning Python, Fourth Edition|http://oreilly.com/catalog/9780596158071/]]// by Mark Lutz, from which reading recommendations will be made throughout the quarter.

{{kManicule{&#9758;}}} [[This link|http://0-proquest.safaribooksonline.com.library.scad.edu/book/programming/python/9780596805395]] should take you directly to //Learning Python//.

!!!!Exercises
To submit exercises, email --khuff@scad.edu-- with the script file ({{{*.mel}}}, {{{*.py}}}, etc.) as an attachment. Include “TECH 312” and/or the procedure name and/or exercise title in the subject of the email.

Once an exercise has been approved, place a copy of the script file in your directory in the drop box, in a subdirectory titled, {{{LastnameFirstname_Exercises/}}}.

Remember, with MEL, the file name should match the procedure name. For the most part, for our purposes, each MEL procedure should be in its own {{{.mel}}} file.

All Python ({{{.py}}}) files should include the following header near the top of the file:
{{{
# TECH 312 Term Year
# Your Name
}}}

All MEL ({{{.mel}}}) files should include the following header near the top of the file:
{{{
// TECH 312 Term Year
// Your Name
}}}

The following apply to all of your script submissions:
*Use long command flags and long attribute names (e.g., {{{sphere -radius ...}}}, not {{{sphere -r ...}}}).
*Use comments in your code to document aspects that are not obvious.
*Include documentation comments at the beginning of the script that describe what the script does and give at least one usage example. This also is a good place to document any known bugs or limitations of the script. For Python modules, include both module and function docstrings.
*Use variable and procedure names that are intuitive and consistent.
*Use indentation and white space in a consistent, legible manner, following the example of scripts shown in class or provided in the _MATERIAL directory of the drop box.
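Taken together, the requirements above might look like this in a minimal Python submission. This is a sketch; the procedure name and its behavior are hypothetical, not part of any assignment.

```python
#!/usr/bin/env python2.7
# TECH 312 Term Year
# Your Name

"""Double a value.

A hypothetical example of the required header and documentation
comments, with a usage example as required above.

Usage:
    >>> double_value(21)
    42

Known limitations: none.
"""


def double_value(value):
    """Return twice the given value (a function docstring, as required)."""
    return value * 2
```

Note the intuitive procedure name and the module docstring with a usage example, per the list above.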
!!!!Exercise grading
The grading for exercises is structured such that if you complete all exercises, meeting the minimum specifications, you will receive an 85%. If you make a reasonable attempt at all exercises, but take none of them to completion, you will receive a 70%. Missing exercises impose a penalty. Most, if not all, of the exercises can be expanded to include extra features. If you would like to earn a grade above 85%, go beyond the specifications, demonstrate exploration, make evident a deeper understanding of the principles and employ visual creativity. Suggestions will be made with each exercise for possible additional features. In the cases of some of the early, simpler exercises, the base grade will be 100%, upon completion. For those exercises, additional features will result in extra credit.
Before class 2, you should have a [[head shot|Head shot]] in place.

''Important:'' Before class 2, you should set up your development environment for the class. This includes
*Preparing a [[bash_custom]] file
*Setting up a [[text editor|Text editors]]
*Installing Python on your personal computer (version 2.7 preferred). [[Download here.|http://python.org/download/]]

''POST CLASS UPDATE (2:15 p.m.):'' I have confirmed the [[bash_custom]] and [[text editor|Text editors]] instructions referenced above. I also have placed a copy of a {{{bash_custom}}} file in _MATERIAL.

Read the [[final project specifications|TECH 312: Final project]]  and be prepared with some preliminary ideas for Class 2.
[[Here is a nice demonstration of a number of sorting algorithms|http://cg.scs.carleton.ca/~morin/misc/sortalg/]] which demonstrates their relative speeds. (Thank you, Zahari.)

A sample {{{userSetup.mel}}} has been added to _MATERIAL, along with a utility script, {{{toggle_renderThumbnailUpdate.mel}}}, which is called within {{{userSetup.mel}}}. As discussed in class, I recommend that you copy these scripts to the version-specific {{{scripts/}}} directory for your current version of //Maya//. For example, {{{.../maya/2011-x64/scripts/}}}. {{{toggle_renderThumbnailUpdate.mel}}} is [[described in detail here|Maya: Toggling the update of render thumbnails]].

//[[String Formatting Operations|http://docs.python.org/library/stdtypes.html#string-formatting-operations]]// in the Python documentation may be of use.
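For example, the printf-style operations covered there work like this (the names and values below are illustrative):

```python
# %-style string formatting, as described in the linked documentation.
name = "sphere1"
radius = 2.5
label = "%s has radius %.2f" % (name, radius)
assert label == "sphere1 has radius 2.50"

# Zero-padded field widths are handy for frame-numbered file names.
frame_name = "render.%04d.tif" % 7
assert frame_name == "render.0007.tif"
```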

''Assignment:'' [[Light with ramp falloff exercise|TECH 312: Light with ramp falloff assignment]] &mdash; You should have this exercise completed by class 13.
!!!!Some generated sentences
*Since the ugly zebra painted on a glorious box, the fragrant hippo clapped. (Kersti)
*A fluffy crow will maim the bubbly crow. (Kristen)
*Has a fluffy cat ate a fluffy crow yet? (Kristen)
*Noooo! The girl buys a spoon. (Soumitra)
*A child forgot and ate the hand? (Steve)
*A Ken Huff that is often organized is seldom played. (Steve)
*The confused scared apple cautiously ran after a malnurished professor. (Luke)
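Sentences like these can come from a simple template-and-word-list approach. Here is a minimal sketch; the word lists and template are my own illustrations, not the assignment specification:

```python
import random

# Hypothetical word lists; the real exercise defines its own grammar.
ADJECTIVES = ["fluffy", "bubbly", "fragrant", "glorious"]
NOUNS = ["crow", "hippo", "zebra", "spoon"]
VERBS = ["maimed", "painted", "bought", "forgot"]


def generate_sentence(rng=random):
    """Return one randomly assembled sentence from the word lists."""
    return "The %s %s %s the %s %s." % (
        rng.choice(ADJECTIVES), rng.choice(NOUNS), rng.choice(VERBS),
        rng.choice(ADJECTIVES), rng.choice(NOUNS))
```

Passing a seeded {{{random.Random(n)}}} instance as {{{rng}}} makes the output repeatable for testing.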
''Assignment:'' [[Data parsing exercise|TECH 312: Data parsing assignment]] &mdash; Before next class, you should have found a data source for your exercise. If you are unsure of a particular source, feel free to email a link to me and I will give you some feedback.

[[Magnetic resonance images of vegetables and fruits.|http://insideinsides.blogspot.com/]]
Following on the intent of the [[data visualization assignment|TECH 312: Data parsing assignment]], the [[2010 International Science and Engineering Visualization Challenge|http://www.wired.com/wiredscience/2011/02/science-visualizations-gallery/]] recently announced its top entries. Two of my favorites: //[[GlyphSea|http://www.sdsc.edu/us/visservices/software/glyphsea/]]// and the [[Human Immunodeficiency Virus|http://visualscience.ru/en/illustrations/modelling/hiv/]] model. The same company that created the HIV visualization also has a stunning [[H1N1|http://visualscience.ru/en/illustrations/modelling/influenza-H1N1/]] model.

Python picked a peck of [[pickle|http://docs.python.org/library/pickle.html#]], providing [[pretty powerful persistence.|http://www.doughellmann.com/PyMOTW/pickle/index.html#module-pickle]]

The following has been added to _MATERIAL:
*{{{FileHandling/Maya-XML-Example/processGPSTrack_with_pickle.py}}} &mdash; Produces the same end result as the previous GPS examples, but splits its process into two distinct phases: 1) read the .gpx file, extract the trackpoints, and save the resulting list of tuples to a pickle file; 2) read the previously-saved pickle file and create a NURBS curve in //Maya//.
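The two-phase pattern can be sketched as follows; the trackpoint values are made up, and the //Maya// curve-building step is replaced by a simple read-back:

```python
import pickle

# Phase 1: extract the data (stand-in values here) and save it.
trackpoints = [(-81.09, 32.08, 3.0), (-81.10, 32.09, 3.5)]
with open("trackpoints.pkl", "wb") as out_file:
    pickle.dump(trackpoints, out_file)

# Phase 2: a later, separate run reloads the list exactly as saved;
# in the class example, this is where the NURBS curve would be built.
with open("trackpoints.pkl", "rb") as in_file:
    restored = pickle.load(in_file)

assert restored == trackpoints
```

Splitting the phases means the slow parsing step runs once, while the scene-building step can be rerun as often as needed.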

Try this in Python 2.7 or later:
{{{
>>> import antigravity
}}}

And...[[something else to bounce around in your noggin.|http://blog.vivekhaldar.com/post/3339907908/the-cognitive-style-of-unix]]
In {{{_MATERIAL/FileHandling/}}}:
*{{{geocodeCity.py}}} has been updated a bit. This script pulls geocoded data from Google based on a city name. See the file for references.
*{{{python_gis/}}} subdirectory has been added. Contains examples of custom Python ~SOPs for Houdini. See the {{{00_README_from_Ken.txt}}} in the directory for more information.
!!!!Recipe for file dialog boxes from the command line using Tkinter
Here is the code that I demonstrated which uses the [[Tkinter|http://docs.python.org/library/tkinter.html]] and tkFileDialog modules to present a file dialog from a command line script.
{{{
import Tkinter, tkFileDialog
root = Tkinter.Tk()
result_directory_path = tkFileDialog.askdirectory(parent=root, initialdir="/", title='Pick a directory')
root.withdraw()
}}}
Use the following to investigate the tkFileDialog module (specific documentation is lacking for this module):
{{{
>>> import tkFileDialog
>>> help(tkFileDialog)
}}}


Also...[[Famous Curves Index|http://www-history.mcs.st-and.ac.uk/Curves/Curves.html]] &mdash; The stories and formulas for some well-known curves (Thanks, Tom).
Here is a blog post about the [[design philosophy of Python|http://python-history.blogspot.com/2009/01/pythons-design-philosophy.html]] from the language&rsquo;s [[creator|http://www.python.org/~guido/]]. For that matter, [[the entire blog is dedicated to the history of Python|http://python-history.blogspot.com/]] and touches on a wide range of topics.
There will be an [[extra help session|Extra help sessions]] this Saturday.

I have added a note regarding [[interesting packages and modules for Python|Python: Interesting packages and modules]]. A work-in-progress and open to suggestions...
I have started to develop [[a note about remote procedure calls|Python: Remote procedure calls]], based on the brief demonstration last class.

!!!!A special location for Python scripts for Houdini
Shh! It&rsquo;s a secret. Not really. //Houdini// has a global expression variable, {{{$HIP}}}, which contains the path to the directory containing the current scene ({{{.hip}}}) file. If a {{{$HIP/scripts/python/}}} directory exists, it will be added automatically to the Python search path when a //.hip// file is opened. To confirm, enter the following in the Python Shell in //Houdini//:
{{{
import sys, pprint
pprint.pprint(sys.path)
}}}
You should see your path listed in the results.

{{{$HIP/scripts/python/}}} would be an appropriate place to put scripts that are specific to an entire project, but maybe not specific to any one of the //.hip// files. Or, maybe, a place to put third-party modules and packages that are specific to a particular project. Or, maybe, if you want to use some specific modules and/or packages on the renderfarm and you want them to be available to your //Houdini// scene.

This is part of a larger mechanism by which //Houdini// searches its paths for {{{pythonX.Ylibs/}}} (where {{{X.Y}}} is a Python version number) and {{{scripts/python/}}} directories, all of which //Houdini// automatically will add to {{{sys.path}}}.
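Outside //Houdini//, the same effect can be had by appending to {{{sys.path}}} yourself; the project path below is hypothetical:

```python
import os
import sys

# Houdini adds matching scripts/python/ directories automatically;
# this is the equivalent manual step. The project path is made up.
project_scripts = os.path.join("/path/to/project", "scripts", "python")
if project_scripts not in sys.path:
    sys.path.append(project_scripts)

assert project_scripts in sys.path
```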

!!!!A follow-up note on Maya UI code and Python raw strings
The {{{UserInterfaceExamples_Maya.py}}} file takes advantage of Python raw strings when passing command strings to user interface elements (see examples 5 and 6 in the file).

In Python //raw strings//, backslash characters ({{{\}}}) are not interpreted as escape characters. In the user interface examples, this allows us to rewrite this
{{{
... changeCommand='python("gui_example5_changeCommand(\\\"demoWindow_UI_Text1\\\")")' ...
}}}
as
{{{
... changeCommand=r'python("gui_example5_changeCommand(\"demoWindow_UI_Text1\")")' ...
(Changes are      ^ here                               ^ here      and      ^ here.)
}}}
This keeps things as simple as possible when one language (Python) builds a command string for another (MEL) that, in turn, calls back into the first. Or something like that. Whew.
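You can verify the equivalence of the two forms in a plain Python session:

```python
# Both literals produce the same string; the raw form simply needs
# fewer backslashes because \ is not treated as an escape character.
plain = 'python("gui_example5_changeCommand(\\\"demoWindow_UI_Text1\\\")")'
raw = r'python("gui_example5_changeCommand(\"demoWindow_UI_Text1\")")'
assert plain == raw
```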

!!!!Also ~UI-related...
Earlier in the quarter, I gave you {{{toggle_renderThumbnailUpdate.mel}}}. That script is described in more detail in the [[Maya: Toggling the update of render thumbnails]] note. It is an example of hacking the main Maya UI and also an example of what I believe is the only acceptable use of global variables (i.e., using built-in Maya global variables to access existing UI elements).
Of interest: [[Matt Ebb|http://mke3.net/]] has created a [[raytracer VOP SOP|http://mke3.net/weblog/raytracer-vopsop/]] and has posted a couple of videos: [[video one|http://vimeo.com/20700092]] and [[video two|http://vimeo.com/22438117]].

I added {{{progress_bar.py}}} which demonstrates a method for creating a text-based progress bar for command line scripts.
!!!!Optimization
I have added the {{{optimization/}}} directory to _MATERIAL. It contains a number of examples of optimization techniques for Python and Python in Maya. The scripts mostly are set up to run from the command line, but due to the Python version issue on the school workstations, you may need to specify which version of Python to use when executing the files. For example:
{{{
python2.7 optimization_concatenation.py
}}}
instead of simply
{{{
./optimization_concatenation.py
}}}
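As a sketch of the kind of comparison those files make (this is not the actual contents of {{{optimization_concatenation.py}}}), consider building a long string with {{{+=}}} versus {{{str.join()}}}:

```python
import timeit


def concat_loop(count):
    """Build a string by repeated concatenation."""
    result = ""
    for i in range(count):
        result += str(i)
    return result


def concat_join(count):
    """Build the same string with a single join."""
    return "".join(str(i) for i in range(count))


# Both approaches produce identical output...
assert concat_loop(1000) == concat_join(1000)

# ...and timeit (like cProfile in the class examples) can compare them.
loop_seconds = timeit.timeit(lambda: concat_loop(1000), number=100)
join_seconds = timeit.timeit(lambda: concat_join(1000), number=100)
```

{{{join()}}} avoids repeatedly reallocating the growing string, which matters as the data gets large.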
!!!!892 ways to partition a 3x4 grid
[[This|http://www.dubberly.com/concept-maps/3x4grid.html]] would make a fascinating scripting study, either implementing the algorithms documented in their references or a more &ldquo;naive&rdquo;, brute-force approach. You wanted something to do over the break, right? ;-)

&ldquo;Off-topic&rdquo;, you say?!? Think of [[procedural city generation|http://blip.tv/file/569059]] and how you might produce valid patterns for the configuration of individual city blocks&hellip;
Last class, I showed Kevin Webster&rsquo;s //[[metacosm project|http://rabidpraxis.com/projects/metacosm_project/]]//. Kevin has posted a number of the generated videos on [[vimeo|http://vimeo.com/kevinwebster/videos/]] and has followed up the project with a new work-in-progress, //[[the metacosm project redux|http://rabidpraxis.com/projects/metacosm_project_redux/]]//.

If you are looking for some inspiration for final project ideas, the //[[information aesthetics|http://infosthetics.com/]]// blog might be an excellent resource. This blog will be a primary reference for the [[data-parsing/file-handling exercise|TECH 312: Data parsing assignment]], later in the quarter.

Here are some links to third-party packages that I mentioned in class:
*//[[PyEphem|http://rhodesmill.org/pyephem/]]// &mdash; Scientific-grade astronomical computations.
*//[[SciPy|http://www.scipy.org/]]// and //[[NumPy|http://numpy.scipy.org/]]// &mdash; For mathematics, science, and engineering.
*//[[iPython|http://ipython.scipy.org/moin/]]// &mdash; an enhanced Python interpreter and a framework for parallel computing.

''Recommended reading:'' //Learning Python//, chapters 1&ndash;3. {{kManicule{&#9758;}}} [[This link|http://0-proquest.safaribooksonline.com.library.scad.edu/book/programming/python/9780596805395]] should take you directly to //Learning Python//.
!!!!Shebang
Here is a better version of the &ldquo;shebang&rdquo; I showed in class. Please let me know if this does not work for you.
{{{
#!/usr/bin/env python2.7
}}}
Submissions for the final project are due by the end of class today.
''Recommended reading:'' //Learning Python//, chapters 4&ndash;9. {{kManicule{&#9758;}}} [[This link|http://0-proquest.safaribooksonline.com.library.scad.edu/book/programming/python/9780596805395]] should take you directly to //Learning Python//.
''Reminder:'' The [[project proposal|TECH 312: Final project]] is due at the start of [[class 6|TECH 312: Class 6]].

For class 5, you should have a manually-created demonstration of your proposed final project.

From the Python documentation, [[here is a list of built-in functions.|http://docs.python.org/library/functions.html]]

''Recommended reading:'' //Learning Python//, chapters 10, 11 and 13. {{kManicule{&#9758;}}} [[This link|http://0-proquest.safaribooksonline.com.library.scad.edu/book/programming/python/9780596805395]] should take you directly to //Learning Python//.
!!!!In-class text manipulation from last class
Here is a sample result from the in-class exercise:
{{{
$> ./letterGrid.py 
zebra arbez
ebraz zarbe
braze ezarb
razeb bezar
azebr rbeza
}}}
Your script should be an executable, standalone script that can be run from the command line as shown above.

''Assignment:'' Here is the preliminary specification for the [[sentence generator exercise|TECH 312: Sentence generator assignment]].
''Reminder:'' The [[project proposal|TECH 312: Final project]] is due at the start of [[class 6|TECH 312: Class 6]].

The letter grid in-class exercise from class 4 is not an assignment that will be submitted for grading. You should, nonetheless, get it working. You should be able to create both a for-in loop version and a while loop version. After today&rsquo;s class, you should be able to create a version that will accept input from the command line. You will be expected to demonstrate that functionality in class 6.

''Recommended reading:'' //Learning Python//, chapters 12, 14 and 21 (I know...I am jumping around a bit.). {{kManicule{&#9758;}}} [[This link|http://0-proquest.safaribooksonline.com.library.scad.edu/book/programming/python/9780596805395]] should take you directly to //Learning Python//.
!!!!Examples
The following were discussed in class and have been added to _MATERIAL in the drop box:
*{{{fibonacci.py}}} &mdash; A preliminary version of this module, showing some of the basic features of a formal module file.
*{{{optimization_*.py}}} &mdash; Some optimization examples utilizing the //[[cProfile|http://docs.python.org/library/profile.html]]// module to time the execution of code.
!!!!Documentation references
*Style conventions for docstrings: //[[Python Enhancement Proposal (PEP) 257|http://www.python.org/dev/peps/pep-0257/]]//
*General style guide for Python code: //[[PEP 8|http://www.python.org/dev/peps/pep-0008/]]//
*[[Tutorial for module set-up|http://docs.python.org/tutorial/modules.html]]
*[[random module|http://docs.python.org/library/random.html]]
*[[built-in functions|http://docs.python.org/library/functions.html]]
*[[doctest module documentation|http://docs.python.org/library/doctest.html]]
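Putting those references together, a minimal formal module (a simplified sketch in the spirit of {{{fibonacci.py}}}, not its actual contents) might look like:

```python
"""Compute Fibonacci numbers (a simplified example module).

Follows the docstring conventions of PEP 257 and includes doctests
that the doctest module can verify.
"""


def fibonacci(n):
    """Return the nth Fibonacci number (0-indexed).

    >>> fibonacci(0)
    0
    >>> fibonacci(7)
    13
    """
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a


if __name__ == "__main__":
    import doctest
    doctest.testmod()
```

Running the file directly executes the doctests; importing it as a module does not.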
The [[project proposal|TECH 312: Final project]] is due at the start of today&rsquo;s class.

There is a small entry at the bottom of [[Python notes]] regarding colorizing the results of your {{{print()}}} function/statement. I&rsquo;m just sayin&rsquo;.

''Recommended reading:'' //Learning Python//, chapters 21 and 22. {{kManicule{&#9758;}}} [[This link|http://0-proquest.safaribooksonline.com.library.scad.edu/book/programming/python/9780596805395]] should take you directly to //Learning Python//.

''Please remember'' that all {{{.py}}} files should include the following header (right below the {{{#!/usr/bin/env python2.7}}} line if you are creating a standalone module):
{{{
# TECH 312 Term Year
# Your Name
}}}
You should have e-mailed a copy of your [[sentence generator exercise|TECH 312: Sentence generator assignment]] to me before the start of today&rsquo;s class.

From NPR, [[Program Creates Computer-Generated Sports Stories|http://www.npr.org/templates/story/story.php?storyId=122424166&ps=rs]] and [[Robot Journalist Out-Writes Human Sports Reporter|http://www.npr.org/2011/04/17/135471975/robot-journalist-out-writes-human-sports-reporter]]. Both are stories about [[Stats Monkey|http://infolab.northwestern.edu/projects/stats-monkey/]]. These class notes are not written by a robot.

''Assignment:'' [[Fit function exercise|TECH 312: Fit function assignment]]. You should work to have a version submitted by class 9.
The [[Google Python Class|http://code.google.com/edu/languages/google-python-class/]] assumes very little previous programming knowledge and is a good introduction to the language (includes videos, write-ups and exercises). The course originator and lecturer, [[Nick Parlante|http://www-cs-faculty.stanford.edu/~nick/]] also has created [[codingbat.com|http://codingbat.com/python]], a site with on-line exercises for Python (and Java). Both would be good if you would like another perspective on an introduction to the language.

On large projects or when developing software with a group of people, coding guidelines often are established. [[Here is a set of guidelines|http://bayes.colorado.edu/PythonGuidelines.html]] from a bioinformatics project at the University of Colorado at Boulder. I present it not as a set of rules for this class, but as an example of the kinds of standards and practices that can (and should) be established for larger projects. Much of what is written on that page also is basic good-form Python technique.
You should have e-mailed a copy of your [[fit function exercise|TECH 312: Fit function assignment]] to me before the start of today&rsquo;s class.

The {{{randomizeTranslate*.*}}} scripts discussed in class are in _MATERIAL.

''Assignment:'' [[Randomize transforms module exercise|TECH 312: Randomize transforms assignment]] &mdash; You should plan to have this exercise submitted by class 10.

Information has been posted regarding an upcoming [[Extra help session|Extra help sessions]].

!!!!~PyMel and API/~OpenMaya methods
When using ~PyMel, be cautious of any node methods whose descriptions contain something like &ldquo;Derived from ''api method'' maya.~OpenMaya...&rdquo; [my emphasis]. Here is [[an example|http://www.luma-pictures.com/tools/pymel/docs/1.0/generated/classes/pymel.core.nodetypes/pymel.core.nodetypes.Mesh.html#pymel.core.nodetypes.Mesh.assignUV]] from the //Mesh// nodetype. These methods were constructed out of the ~OpenMaya API and they seem to work only when used in the context of creating a plugin. I have started avoiding them altogether (at least until I see good documentation or a good example that shows their use).
For this assignment, you will be using the string manipulation and file-handling capabilities of Python to generate a scene in //Maya// or //Houdini// based on data parsed from text or binary files. The source files likely will be from an on-line resource. The source also could be a direct retrieval of an RSS feed or HTML file.

Your tool should ask the user for a file via a file browser, open the file, read the data and create a representation or manifestation of the data in a //Maya// or //Houdini// scene. The interpretation of the data does not need to be literal, but there should be a correlation between variation in the data and variation in the virtual result. As an alternative to presenting a file browser, your tool could retrieve the data directly from an on-line source.

Some examples:
*An animation representing the average daily temperature in a particular location over the past //n// years
*A light rig that accurately orients a directional light to the position of the sun or moon at a particular location on the Earth at a particular time and date
*A model of the [[Fibrin molecule|http://www.pdb.org/pdb/static.do?p=education_discussion/molecule_of_the_month/pdb83_1.html]] which helps stop the bleeding of a cut

Some of the data sets with which you might work can be very large. You will want to extract a very small portion for testing purposes (maybe 50–200 lines of data), but your final system should be able to handle a complete data set from your chosen source.
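The parsing step might be sketched like this, using a made-up comma-separated temperature format; a real source will need handling specific to its own layout:

```python
# Hypothetical 'date,temperature' records standing in for a small
# extracted test portion of a much larger data file.
sample_lines = [
    "2011-01-01,12.5",
    "2011-01-02,13.1",
    "2011-01-03,11.8",
]


def parse_temperatures(lines):
    """Return a list of (date, temperature) tuples."""
    records = []
    for line in lines:
        date_text, temperature_text = line.strip().split(",")
        records.append((date_text, float(temperature_text)))
    return records


records = parse_temperatures(sample_lines)
assert records[0] == ("2011-01-01", 12.5)
```

A list of tuples like {{{records}}} is then easy to map to keyframes, geometry, or other scene elements.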

Here are some suggested data sources and some points (or clusters) of inspiration. Do not limit yourself to this list.
*[[Protein Data Bank|http://www.pdb.org/]] &mdash; The data is available in multiple formats, but the [[PDB format|http://deposit.rcsb.org/adit/docs/pdb_atom_format.html]] is appropriate for our purposes. (If you prefer to work with XML-based data, use the [[PDBML format|http://pdbml.pdb.org/]].) It would be acceptable to implement only a portion of the format, e.g., the ATOM keyword. You should follow the [[CPK convention of color|http://en.wikipedia.org/wiki/CPK_coloring]] for the individual atoms and use a scale based on the [[van der Waals radii of atoms|http://en.wikipedia.org/wiki/Van_der_Waals_radius]].
*[[OpenStreetMap|http://www.openstreetmap.org/]] &mdash; Data from the ~OpenStreetMap project can be extracted in [[OpenStreetMap XML Data format|http://wiki.openstreetmap.org/wiki/Data_Primitives]].
*[[Earthquakes|http://earthquake.usgs.gov/]] via the United States Geological Survey.
*[[National Climate Data Center|http://www.ncdc.noaa.gov/]] &mdash; Many possibilities here, from current data to historical records.
*[[Information Aesthetics|http://infosthetics.com/]] blog &mdash; Deep archives of information visualizations (massive cluster of inspiration and possible sources).
*The //[[Digital Universe Atlas|http://www.haydenplanetarium.org/universe/]]// from the Hayden Planetarium is a very accessible compilation of astronomical data. (After download, look in {{{data/extragalactic/specks/}}} and {{{data/milkyway/specks/}}} for easily-parsed data files.)
*Hans Rosling is the current godfather (my term) of open data and data visualization; his [[gapminder.org|http://www.gapminder.org/]] has been widely publicized and links to videos of many of Rosling&rsquo;s presentations. The hour-long //[[The Joy of Stats|http://www.gapminder.org/videos/the-joy-of-stats/]]// is a great place to start.
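If you choose the Protein Data Bank source, for example, a fixed-column parse of a single ATOM record might start like this. The sample record below is fabricated; the column positions follow the PDB format reference linked above:

```python
def parse_atom_line(line):
    """Return (atom name, element, (x, y, z)) from one ATOM record.

    PDB ATOM records use fixed columns: the atom name in columns
    13-16, x/y/z in columns 31-38, 39-46 and 47-54, and the element
    symbol in columns 77-78 (1-based, per the format reference).
    """
    name = line[12:16].strip()
    x = float(line[30:38])
    y = float(line[38:46])
    z = float(line[46:54])
    element = line[76:78].strip()
    return name, element, (x, y, z)


# A fabricated nitrogen ATOM record for a methionine residue.
sample = ("ATOM      1  N   MET A   1    "
          "  38.000  19.000  15.000  1.00  0.00"
          "           N")
assert parse_atom_line(sample) == ("N", "N", (38.0, 19.0, 15.0))
```

From there, the element symbol can drive the CPK color and van der Waals radius lookups mentioned above.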

Here are some additional, excellent, deep sources for structured data:
*[[data.gov|http://www.data.gov]] &mdash; the U.S. Government&rsquo;s web site for open data.
*[[govtrack.us|http://www.govtrack.us/]] &mdash; &ldquo;A civic project to track Congress&rdquo; &mdash; I have not found a raw data source on this site, but [[the concept is very interesting|http://www.govtrack.us/about.xpd]] (they apparently extract much of their data from the [[Library of Congress THOMAS site|http://thomas.loc.gov/]]).
*[[getthedata.org|http://getthedata.org/]] &mdash; a Q&A site regarding data sources, uses, formats, etc.
*[[publicdata.eu|http://publicdata.eu/]] &mdash; Not to be outdone by [[data.gov|http://www.data.gov]], here is a European Union site of open, public data (including international data from beyond E.U. borders).
*[[datamarket.com|http://datamarket.com/]] &mdash; A central clearinghouse for data sources; some of the sources cited are commercial (paid) resources.
*[[National Historical Geographic Information System|http://www.nhgis.org/]] &mdash; Offers aggregate census data and ~GIS-compatible shapefiles for the U.S., 1790&ndash;2000 (free, but registration required).
*[[Google Public Data Explorer|http://www.google.com/publicdata/home]] &mdash; Primarily a web-based data visualization site, but does have links to original data sources.

Finally, here are some interesting data visualization projects and/or resources:
*//[[Walking in Color Space|http://solaas.com.ar/node/25]]// by [[Leonardo Solaas|http://solaas.com.ar/bio]] &mdash; An abstracted exploration of the colors in images, on a per-pixel basis.
*//[[A History of the World in 100 Seconds|http://vimeo.com/19088241]]// by Gareth Lloyd and Tom Martin &mdash; An exploration of date and location for //Wikipedia// articles; see also [[this|http://www.ragtag.info/2011/feb/2/history-world-100-seconds/]] and [[this|http://www.ragtag.info/2011/feb/10/processing-every-wikipedia-article/]] blog post from one of the creators.
*[[Edward Tufte|http://www.edwardtufte.com/tufte/index]] &mdash; His books are available in the library; Thoughtful and beautiful.
*[[In case you get tired of being in front of a computer.|http://infosthetics.com/archives/2011/04/data_visualization_survival_kit_creating_visualizations_in_the_wild.html]]
* David Wicks has created [[a lovely visualization of water usage in the United States.|http://sansumbrella.com/works/2011/drawing-water/]]
* Visualizations tracking the appearance of various characters in the //Avengers// comic book series: Parts [[one|http://blog.blprnt.com/blog/blprnt/avengers-assembled-and-visualized-part-1]] and [[two|http://blog.blprnt.com/blog/blprnt/avengers-assembled-and-visualized-part-2]].

!!!!Submission requirements
''Submit this exercise via the drop box.'' Send an email when you have a version in place that should be evaluated. //Do not send the submission files via email.//

Submit the exercise in a directory, {{{LastnameFirstname_DataParsing_}}}//{{{Subject}}}//{{{/}}}, where //{{{Subject}}}// briefly describes your data source. For example, {{{HuffKen_DataParsing_Weather/}}}.

The submission should include the following:
*A text file, {{{source.txt}}}, which contains the URL(s) for your data source and a brief description of the data. Also include links for any information that you used to decipher the data format.
*A Python file (module) containing your procedures. The exact name should be based on your data set and what your procedure does with the data. For example, {{{weatherVisualization.py}}}.
*A data file. The specific format and naming will depend on the source. You may need to submit more than one file, depending on the source and data type.
*A sample data file. This is an //optional// testing file that you should create if your data file is large. The naming and format will also depend on the source, but the name should have {{{_sample}}} appended after the name of the file but before the extension. For example, if the data file was called {{{weather.txt}}}, the sample file would be {{{weather_sample.txt}}}.
*A //Maya// ASCII scene file or //Houdini// .hip file showing the results of the use of your tool.
*If you use any //Python// modules or packages that are not part of the Python Standard Library, you should include them in your submission. Please ask me for the specific directory locations.
*A movie file, {{{LastnameFirstname_DataParsing_}}}//{{{Subject}}}//{{{_Demo.ext}}}, which demonstrates your tool/system. There is no time limit, but the presentation should be succinct. The movie file should be compressed, but not to the point of degrading the quality of the video. NO uncompressed movies! There is no specific requirement for codec or resolution, but the movie file should be playable on Mac OS X, Linux and Windows (test the playback of movies from your screen capture process on all three platforms before proceeding with final capture). The preferred, safe format is ~QuickTime H.264. You may use any platform for the screen capture process. If your process involves interaction with physical devices, you should include video footage that makes that clear to the viewer. ([[Here are some notes on video screen capture.|Video screen capture]]) ''Important:''&nbsp;You should test your screen capture and compression processes well ahead of the due date for the project.
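The {{{_sample}}} naming rule above can be expressed with {{{os.path.splitext()}}}; here is a small illustrative helper (the function name is my own, not a requirement):

```python
import os

def sample_name(path):
    # Insert "_sample" between the base name and the extension,
    # e.g. "weather.txt" -> "weather_sample.txt".
    base, ext = os.path.splitext(path)
    return base + "_sample" + ext
```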
!!!!Grading
Grades above middle-B will be based on complexity and aesthetics. This assignment will count as two exercises, i.e., its grading weight will be double that of previous exercises.
!!!!Houdini implementations
If you are creating a //Houdini//-based project, it is likely that you will be working with a //[[Python SOP|http://localhost:48626/hom/pythonsop]]//. Be sure to save this custom SOP in an external //operator type library// ({{{.otl}}}) file and to include the {{{.otl}}} file in your submission directory.

Houdini ships with a //[[Table Import SOP|http://localhost:48626/nodes/sop/tableimport]]//, which is a very nice example of a generalized tool for reading comma-separated value (CSV) text files. The node is implemented as a Python SOP, so you can examine the code via the Type Properties window. You //may not// use this SOP directly in your exercise.
!!!!Python resources
Here are some links to specific sections of the on-line language documentation for Python that will be useful for this assignment:
*[[Python Standard Library|http://docs.python.org/library/]] &mdash; Documentation of the library of modules that are part of the standard //Python// distribution.
*[[Built-in functions|http://docs.python.org/library/functions.html]]
*[[Sequence types (including strings, lists and tuples)|http://docs.python.org/library/stdtypes.html#sequence-types-str-unicode-list-tuple-buffer-xrange]]
*[[File objects|http://docs.python.org/library/stdtypes.html#file-objects]]
*[[string|http://docs.python.org/library/string.html]] &mdash; definition of the string object and its characteristics; for string methods/functions, see //[[String Methods|http://docs.python.org/library/stdtypes.html#string-methods]]//
*[[re|http://docs.python.org/library/re.html]] &mdash; regular expression operations
*[[math|http://docs.python.org/library/math.html]] &mdash; mathematical functions
*From the [[python.org HOWTOs|http://docs.python.org/dev/howto/]]: [[HOWTO Fetch Internet Resources Using The urllib Package|http://docs.python.org/dev/howto/urllib2.html]]
*[[pickle|http://docs.python.org/library/pickle.html#]] provides pretty powerful persistence
*[[Beautiful Soup|http://www.crummy.com/software/BeautifulSoup/]] &mdash; A Python HTML/XML parser
For this exercise, you will be creating a script that builds a dynamic camera rig around a user-selected camera. The rig is meant to smooth out jarring motion, very much inspired by the //[[Steadicam|http://en.wikipedia.org/wiki/Steadicam]]// concept. As part of the research for this project, you should review online video documentation of such physical rigs being used.

In _MATERIAL, you have been provided with a document, {{{SarmientoJulian-DynamicCameraSetupInMaya.pdf}}}, describing a workflow for building such a rig. This is an old tutorial, the original source of which is included in the file. There are aspects that are out-of-date. Additional notes are included at the top of the file. You may deviate from the tutorial workflow and final results, but the overall intent should remain the same.

Your procedure should be self-contained but you may assume that the //djRivet// script mentioned in the tutorial notes is available for use. The script is available [[here|http://www.djx.com.au/blog/2009/11/03/djrivet-mel-added-support-for-multi-uv-sets/]] and a copy is available in _MATERIAL. //Note that// djRivet //is a MEL script. Part of the research for this project is to determine how to get your ~PyMel-based Python code to interact with a legacy MEL script.//

You should build the rig manually first and then develop the procedure to build it automatically.

You will be submitting both the script which constructs the rig and a //Maya// ASCII scene file which demonstrates the rig in action. When submitting the exercise for review via email, send both the script and the //Maya// ASCII file.

It is suggested that the rig have high level controls which modify its behavior. These controls can consist of custom attributes on a top-level node. Controls could include the &ldquo;springiness&rdquo; of the rig and how closely the rig follows the motion of the master camera to which it is attached.

User input should consist of selecting an animated camera and then invoking your procedure. The resulting camera rig should be constructed and configured entirely by your script.

!!!!Submission
Submit your file via e-mail as an attachment. The file name should be {{{dynamicCamera.py}}}. The //Maya// scene file name should be {{{dynamicCamera.ma}}}.

Once your exercise is approved, you may place a copy of this file in the dropbox in a subdirectory titled, {{{LastnameFirstname_Exercises/}}}.
There are two major components to the final project, the proposal and the implementation.

As reference, you will be shown examples of past projects.
!!!!Proposal contents
*Describe your idea/problem/tool/workflow.
*What is the inspiration for this project?
*What is your primary goal? Are there any secondary goals? Prioritize the features of your project.
*What major software applications will you use for the project (Maya, Houdini, Nuke, Shake, ~AfterEffects, etc., etc.)? Do you have a strong working knowledge of the major software applications you will use? If not, you must justify their use and provide a research timeline to convince me that you will know enough to effectively integrate the application in your project.
*Will you be using any languages other than Python (e.g., MEL, shell scripting, ~JavaScript, etc.)? Describe your experience with the languages you will be using.
*Will any particular data formats play a crucial role in your project (e.g., Maya ASCII, ~OpenEXR, Ptex, etc.)?
* Are you going to be integrating any of the major sub-systems of your software applications (e.g., in Maya: nCloth, Paint Effects, Fluids, Particles, etc.)? If so, do you have experience with those sub-systems?
*What kind of user interface will your tool or workflow have? Command line? Graphical user interface? Give an example (e.g., an illustration of the GUI).
*Include visual references that demonstrate your project or that serve as reference for form, texture, material, look, lighting, effect, motion, etc.
*Are there similar solutions currently available? If so, provide a list and describe how your solution will differ.
*Any other important points that you need to make?
Think of the proposal as a formal specification of your project.
!!!!Proposal submission requirements
*2&ndash;4 pages of text, double-spaced, plus additional pages for illustrations and references. (Illustrations may be interspersed with the text, but the total text, without illustrations, should be 2&ndash;4 pages.)
*Submit as a PDF file, {{{LastnameFirstname_ProjectProposal.pdf}}}.
*All still reference images should be included directly in the PDF.
*For on-line references, include a URL and a description of the resource.
*If you need to include time-based footage that is not available via URL, please place the file(s) in a folder titled, {{{LastnameFirstname_ProjectReferences}}}, and make reference to those files in your proposal.
*All files should be submitted to your folder in the SFDM drop box.
''Proposal due at start of class 6.''

Grading will be based on the effective communication of your idea as well as grammar, spelling, punctuation, etc.

!!!!Final submission of project
A final draft of the submission of the project is due in the drop box at the //start// of class 19. We will look at materials, demonstrate projects and discuss final modifications during this class.

The final, revised submission for the project is due in the drop box by the //end// of class 20.

All submitted files should be in a directory titled, {{{LastnameFirstname_Project_}}}//{{{Description}}}//. Replace //{{{Description}}}// with a one- to two-word description of the project, e.g., {{{HuffKen_Project_ChaosParticles}}}. The directory should include the following:
*All necessary scripts and data files needed to work and test your system (unless we have discussed otherwise on an individual basis). The files should be organized in a clear and logical manner.
*If your tool/system generates geometry, a shading network, an animation rig or any other type of dependency graph or scene data, include a scene/application file (Maya ASCII, Houdini .hip, etc.) containing a sample result from the use of your tool. If your tool produces multiple rigs, include a separate sample file for each result or clearly isolate the separate examples in a single file (through layering in Maya or Geometry Containers in Houdini, for example). If your tool/system requires textures, or has other project-based dependencies, include an entire project directory; otherwise, simply include the application scene files. If including a project directory, remove any files and subdirectories from the project that are not required to demonstrate your tool/system/scenes.
*If your tool/system has external dependencies (e.g., the djRivet tool), include those files in a directory, {{{external/}}}. If you used ~PyMEL, you do not need to include it as an external reference (but you should indicate the ~PyMEL version in your instructions file).
*A text file, {{{instructions.txt}}}, which describes the files included in the submission and how to use them. Assume that the reader knows nothing of your project. Be sure to document any shortcomings or known problems with your system in this text file.
*A movie file which demonstrates your tool/system. There is no time limit, but the presentation should be succinct. The movie file should be compressed, but not to the point of degrading the quality of the video. NO uncompressed movies! There is no specific requirement for codec or resolution, but the movie file should be playable on Mac OS X, Linux and Windows (test the playback of movies from your screen capture process on all three platforms before proceeding with final capture). You may use any platform for the screen capture process. If your process involves interaction with physical devices, you should include video footage that makes that clear to the viewer. ([[Here are some notes on video screen capture.|Video screen capture]]) ''Important:'' You should test your screen capture and compression processes well ahead of the due date for the project.

!!!!Suggestions
*When prioritizing your work schedule on the project, keep in mind that a fancy GUI is less important than functional code. It is very easy to get lost in tweaking user interface code...//that button should be 1 pixel wider//, etc. Don&rsquo;t.
*Remember to [[write legible code.|Writing legible code]] Use meaningful variable names, include comments when appropriate and be consistent in structuring your code.
For this exercise, you will create a Python module that contains a function that maps a value proportionally from one range into another range. For example, if the initial value is 0.5, the original range is 0 to 1 and the new range is 3 to 6, the function would return the value of 4.5. You should model your function after the {{{fit()}}} ~HScript expression function in //Houdini// ([[link|http://www.sidefx.com/docs/houdini11.0/expressions/fit]]).

The //Houdini// {{{fit()}}} expression function clamps the result to the new range, that is, if the value would fall outside of the range, the value is cut off to either end of the range. This should be an optional argument to your function and should default to clamp the value. You may also wish to set default values for the new range to be 0 to 1, effectively normalizing the input.

Python has built-in functions for finding the [[minimum and maximum values of an iterable object (such as a list or a tuple).|http://docs.python.org/library/functions.html?highlight=min#max]] These //may// be used for clamping the result value from your function.

''Additional requirement:'' You should implement {{{doctest}}} testing in your {{{fit.py}}} module ([[doctest module documentation|http://docs.python.org/library/doctest.html]]).
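To make the expected behavior concrete, here is one possible shape for such a function, including a pair of doctests. This is a sketch for illustration only, not the required implementation; your submission should contain your own doctests and any additional features you devise:

```python
def fit(old_value, old_min, old_max, new_min=0, new_max=1.0, clamp=True):
    """Proportionally remap old_value from one range into another.

    >>> fit(0.5, 0, 1, 3, 6)
    4.5
    >>> fit(2, 0, 1, 3, 6, clamp=False)
    9.0
    """
    # Normalized position of old_value within the old range.
    t = (old_value - old_min) / float(old_max - old_min)
    result = new_min + t * (new_max - new_min)
    if clamp:
        # The built-in min()/max() cut the result off at either end of
        # the new range (which may be specified in descending order).
        low, high = min(new_min, new_max), max(new_min, new_max)
        result = max(low, min(high, result))
    return result

if __name__ == "__main__":
    import doctest
    doctest.testmod()
```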

There is no need to make the {{{fit()}}} function accessible from the command line. If you are interested in experimenting, though, you might find the //[[getopt|http://docs.python.org/library/getopt.html]]// or //[[argparse|http://docs.python.org/library/argparse.html]]// modules in the Python Standard Library of use.

Upon successful implementation of this function/module, the baseline grade for this exercise is 100% (in contrast to the standard 85% for most other exercises). Any additional features that you develop will result in extra credit.

!!!!Submission
Submit your file via e-mail as an attachment. The file name should be {{{fit.py}}}.

Once your exercise is approved, you may place a copy of this file in the dropbox in a subdirectory titled, {{{LastnameFirstname_Exercises/}}}.

!!!!Example usage
The {{{def}}} line for your function should read:
{{{
def fit(old_value, old_min, old_max, new_min=0, new_max=1.0, clamp=True):
}}}

|>|>|>|>|>|>|Sample results|
|Input|Old min|Old max|New min|New max|Clamp|Result|
|0.5|0|1|3|6|True|4.5|
|2|0|1|3|6|True|6.0|
|2|0|1|3|6|False|9.0|
|2|1|0|3|6|False|0.0|
|0.25|0|1|3|6|True|3.75|
|0.25|1|0|3|6|True|5.25|
|0|-1|1|0|4|True|2.0|

Here is some sample Python interpreter interaction (with some new examples, different from the table above) :
{{{
>>> import fit
>>> fit.fit(2, 0, 1, 3, 6, clamp=True)
6.0
>>> fit.fit(2, 0, 1, 3, 6, clamp=False)
9.0
>>> fit.fit(2, 1, 0, 3, 6, clamp=False)
0.0
>>> fit.fit(0.25, 1, 0, 3, 6)
5.25
>>> fit.fit(0.25, 0, 1, 3, 6)
3.75
>>> fit.fit(2, 0, 3)
0.66666666666666663
>>> fit.fit(0, -1, 1, 0, 4)
2.0
>>> fit.fit(0.2,-3.5,6.5)
0.37
>>> fit.fit(-3, 0, 4.0)
0.0
>>> fit.fit(-3, 0, 4.0, clamp=False)
-0.75
}}}

!!!!Test script
A script, {{{fit_tests.py}}}, has been placed in _MATERIAL. Copy this script to the same directory as your {{{fit.py}}} and run it like this
{{{
python fit_tests.py
}}}
If you are returned immediately to the command line, your {{{fit()}}} function is working properly. If you see failure messages, you have more work to do...

Because the tests are based on the //doctest// module, you can also do this
{{{
python fit_tests.py -v
}}}
to see the results of each test, both for passed and failed tests.

''Important:'' This script is provided for testing purposes. You should use it as described, but should not copy the tests into your own script. You should write your own doctests, thinking of the possible cases and how to suss out specific behavior.
Create a command line script, {{{ishtime.py}}}, which prints a message indicating the current time in the form shown in the following examples:
{{{
It's about one o'clock in the afternoon.
It's about noon.
It's about thirty-five minutes past nine in the morning.
It's twenty-five minutes to midnight.
}}}

The primary function should be defined as
{{{
def ishtime(hour, minute):
}}}
and should return a string containing the generated sentence. (You may add defaults to the arguments, as you see fit, but there should be just the two arguments as shown above.)

In addition to the primary function, you should include a test procedure, {{{test(count=10)}}}, which automatically tests your primary function by generating random times. The test should print results, for example
{{{
12:57 p.m. -- It's about one o'clock in the afternoon.
12:03 p.m. -- It's about noon.
 9:36 a.m. -- It's about thirty-five past nine in the morning.
11:35 p.m. -- It's about twenty-five minutes to midnight.
...
}}}
The numerical versions of the time should only be printed by the {{{test()}}} function.

[[Here is a text file|inclusions-2011-winter/ishtime_sample_output.txt]] containing //possible// results from all 1440 minutes of a day.

If your script is run from the command line like this
{{{
./ishtime.py
}}}
it should respond based on the current time.

If it is run like this
{{{
./ishtime.py -test
}}}
it should run and print a default of ten random tests.

If it is run like this
{{{
./ishtime.py -test 14
}}}
it should run and print fourteen random tests, etc.

Additional command line arguments and/or more sophisticated behavior of the script will result in additional credit.
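The command line dispatch described above might be sketched as follows. The body of {{{ishtime()}}} here is only a placeholder stub (the real function must build the full English sentence), and the structure is one possibility, not a required layout:

```python
import random
import sys
import time

def ishtime(hour, minute):
    # Placeholder stub only -- your real function returns the generated
    # sentence, e.g. "It's about one o'clock in the afternoon."
    return "It's about %d:%02d-ish." % (hour, minute)

def test(count=10):
    # Print `count` random times alongside the generated sentences.
    for _ in range(count):
        hour, minute = random.randrange(24), random.randrange(60)
        print("%2d:%02d -- %s" % (hour, minute, ishtime(hour, minute)))

def main(argv=None):
    # "-test [count]" runs random tests; otherwise report the current time.
    args = (sys.argv if argv is None else argv)[1:]
    if args and args[0] == "-test":
        test(int(args[1]) if len(args) > 1 else 10)
    else:
        now = time.localtime()
        print(ishtime(now.tm_hour, now.tm_min))

if __name__ == "__main__":
    main()
```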

You may find that defining some behind-the-scenes utility functions will be useful, but it certainly is not necessary. This script will incorporate almost every topic that we have covered to date.

You should document your script with docstrings and appropriate comments. Implementation of doctests is optional, but should not be arbitrary if included. (The {{{test()}}} function should be your primary testing mechanism, but you may wish to use doctests to unit test any utility functions that you create.)

You should work to have a submission to me, via e-mail, before class 8 (next class).

Once your exercise is approved, you may place a copy of this file in the dropbox in a subdirectory titled, {{{LastnameFirstname_Exercises/}}}.

!!!!Standard reference for module behavior
In _MATERIAL, I have provided an {{{ishtime.pyc}}} for my version of the script. You may use this for comparison purposes.

To run from the command line, do any one of the following
{{{
python ishtime.pyc
python ishtime.pyc -test
python ishtime.pyc -test 15
}}}
From the Python interpreter, you can
{{{
>>> import ishtime
>>> ishtime.ishtime(3, 56)
"It's about five minutes 'til four in the morning."
>>> ishtime.test(3)
 4:20 a.m. -- It's twenty minutes past four in the morning.
 9:01 p.m. -- It's about nine o'clock at night.
 4:48 a.m. -- It's about ten minutes 'til five in the morning.
>>> 
}}}
Variation from the behavior of my script is allowed, but whatever you do should make sense.
!!!!Brainstorm groups
In class, you will be shuffled into groups of three or four. Below the standard header comment that should be in every Python file you create for the class, you should include a comment that indicates your brainstorm collaborators. For example,
{{{
# TECH 312 Winter 2011
# Ken Huff
# Brainstorming: Guido van Rossum, Harry Houdini and Mary Oliver
}}}

!!!!References
*//[[String Formatting Operations|http://docs.python.org/library/stdtypes.html#string-formatting-operations]]// in the Python documentation may be of use, especially in your {{{test()}}} function.
For this assignment you will be creating a Python/~PyMEL module for lighting tools that includes a function which creates a light rig.

A sample of the specific light rig you will be creating is located in {{{rampForLightFalloff_Sample.ma}}} in _MATERIAL and is based on [[a workflow created by Joseph Francis|http://www.digitalartform.com/archives/2005/08/hue_falloff_in.html]]. The light rig incorporates a ramp control of the light color which varies based on angle, giving control over the color with the light falloff.

The user will provide your function with two string arguments, one for the light to be modified and the other for the camera being rendered. The {{{def}}} line of your function should look like this:
{{{
def rampForLightFalloff(light, camera):
}}}

The configuration of the ramp should be modified (i.e., do not leave the default blue-green-red colors).

As a final step, your script should select an appropriate node, such as the ramp that was created or the light that the user specified.

After you have this first stage working, you should write an additional function:
{{{
def rampForLightFalloffBasedOnSelection():
}}}
This function should be based on the user selection of a camera and one or more lights in the scene. The function should confirm that the user selection is valid. It then should call the previous function in turn for each of the selected lights.
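A possible skeleton for this selection-driven wrapper, assuming ~PyMEL, might look like the following. The validation details, the warning text and the {{{pm.nt}}} type checks are illustrative assumptions, not a required implementation; the guarded import simply lets the module load outside //Maya//:

```python
try:
    import pymel.core as pm  # available inside Maya
except ImportError:
    pm = None  # allows the module to be imported outside Maya

def rampForLightFalloff(light, camera):
    # Placeholder for your first-stage function described above.
    pass

def rampForLightFalloffBasedOnSelection():
    # Hypothetical sketch: confirm the selection holds exactly one camera
    # and at least one light, then process each light in turn.
    sel = pm.ls(selection=True, transforms=True)
    cameras = [n for n in sel if isinstance(n.getShape(), pm.nt.Camera)]
    lights = [n for n in sel if isinstance(n.getShape(), pm.nt.Light)]
    if len(cameras) != 1 or not lights:
        pm.warning("Select one camera and at least one light.")
        return
    for light in lights:
        rampForLightFalloff(light.name(), cameras[0].name())
```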

''Note:'' With this light rig, you may see one or both of the following error messages:
{{{
// Error: line 1: vectorProduct1: found a zero-length output vector. Result is unpredictable. // 
// Error: line 1: vectorProduct2: found a zero-length output vector. Result is unpredictable. // 
}}}
These occur when the light is directly on the origin (0, 0, 0). The error is expected behavior. The error messages are being generated by the ~VectorProduct nodes based on the scene configuration and are not a result of your script.

You should include appropriate comments, the standard assignment header and docstrings for the module and the function.
!!!!Testing
After testing your script with a single light and a single camera, you also should test with two lights and a single camera. You should end up with independent shading networks for the lights, each of which is connected to the transform node for the camera.

!!!!Submission
Submit your file via e-mail as an attachment. The file name should be {{{lightingTools.py}}}.

Once your exercise is approved, you may place a copy of this file in the dropbox in a subdirectory titled, {{{LastnameFirstname_Exercises/}}}.
!!!!Important changes made in Fall 2010
This course was restructured in Fall 2010, with a new course title, //Advanced Application Scripting.// Instead of emphasizing MEL in //Maya//, the course now focuses on Python as a scripting language. This will not be an introductory course. You will get the most from the class if you have some programming background. If you do not have programming experience, even familiarizing yourself with the basics of Python will allow you to focus your class time on furthering your knowledge. The [[Python notes]] page has numerous suggestions for getting started with Python.

The better prepared the group is and the more experience that individuals have, the further we will be able to explore.

-----

My goal for this course is to help students, regardless of their background or discipline, feel //justifiably confident// in their ability to write code that will help them accomplish useful tasks in their day-to-day work. Scripting should become a tool that you use on every project and something that you fluidly integrate into your workflow. This class is not about rewriting //Maya// or //Houdini// from scratch. It is about reducing repetitive tasks, increasing flexibility and solving everyday production problems.

!!!!Write code
The single most important thing that you can do to learn programming, is to write code.

Programming is not about rote memorization of facts. Programming is about problem solving.
Understanding will not come from reading a text or watching a tutorial.
Understanding will come with doing and with time.
Understanding will come with exploration and with experimentation.
Understanding will come with practice.

Write code.

!!!!Books
There are no required texts for the class, but there is a text that is highly recommended as a general technical reference for //Maya//, David Gould’s //[[Complete Maya Programming (Volume 1).|http://www.davidgould.com/Books/CMP1/]]// This is an excellent resource, not only as an introduction to MEL, but also to Maya’s internal technical structure. The first two chapters cover the deeper underpinnings of Maya. A very long chapter 3 covers MEL and the remainder is about the C++ API. A very nice aspect of the recent integration of Python in Maya is that ~API-level functionality is now available through Python. Note, however, that this book predates and therefore //does not// cover Python.

Please see [[Python notes]] for text recommendations for Python.

!!!!Host applications
We will start by looking at Python on the command line and the ~PyMEL Python implementation in //Maya//. Depending on the overall background of the class group, we will work with //Houdini// and //Nuke// as well. Whether we do so as a group or not, individuals will be encouraged to work with the host application(s) with which they have the most experience and those germane to their discipline or specialization. For example, if you have never worked with //Houdini//, you will not be expected to complete your final project in //Houdini//.

The only prerequisite requirement for the class is that you have had an introductory class which covers //Maya//. I strongly suggest not taking the class, however, until you have had at least one year&rsquo;s experience with //Maya// or have used it extensively in more than one class. Our time together primarily should be spent learning how to create code, not learning the host applications. If you have questions or concerns, please contact me.

!!!!Maya documentation
You should read the following sections of the //Maya// online documentation:
*User’s Guide > General > MEL and Expressions
*User’s Guide > General > Python
You also should review the following sections. Look for commands or nodes that interest you. At the very least, read over the lists of commands and nodes to get a good idea of what is hiding under the hood.
*Technical Documentation > Commands
*Technical Documentation > ~CommandsPython
*Technical Documentation > Nodes
*Technical Documentation > ~PyMEL reference (//see below//)

!!!!MEL
While the focus of the course is Python, there is a vast historical stockpile of MEL out there in the world. It is important to be familiar with MEL as a language if you are a Maya user. Even with the integration of Python, MEL syntax still is used with expressions in Maya.

!!!!Python in //Maya// (and ~PyMEL)
We will be working with [[Python|http://www.python.org/]] in //Maya//. Specifically we will be using the ~PyMEL ([[link|http://code.google.com/p/pymel/]]) module to make Maya’s Python implementation more Python-like. You should become familiar with ~PyMEL, including installing it on your personal system. As of Maya 2011, ~PyMEL is integrated from the factory. If you are using an earlier version of //Maya//, I suggest installing and using the current 1.x release of ~PyMEL.

You also should take a long gander at the list of available Python packages in the [[Python Package Index.|http://pypi.python.org/pypi/]] Any of these could be integrated into your final project.

Here are [[some additional notes and recommendations regarding Python.|Python notes]]

!!!!Final project ideas
The final project is a major component of this class. You will be proposing your own project and should be thinking of ideas. The project could be the development of a new tool, the automation (partial or complete) of a workflow, integration of data from outside sources, etc. You will be shown examples of previous projects during the first few classes. Come up with more than one idea and start the research process for each idea. Look for resources, references and previous solutions. This research likely will help you to home in on the idea that most interests you.

A preference is shown for original projects with a strong visual component. Systems-oriented projects, such as render wrangling tools, also are a possibility.

Python-based projects incorporating //Houdini// and //Nuke// also are encouraged.

!!!!Text editors
Please see [[this page for information regarding text editors.|Text editors]] Currently, it mainly describes setting up jEdit to work for MEL scripting. Almost every text editor recognizes Python code.

!!!!Shell scripting
The course is taught using Linux as the primary operating system. While the subject of the class is not shell scripting, you will need to be familiar with the basics of interacting with the system through a Terminal window.

Apple has posted [[a very good primer for shell scripting.|http://developer.apple.com/mac/library/DOCUMENTATION/OpenSource/Conceptual/ShellScripting/Introduction/Introduction.html]] It is written to be platform agnostic &mdash; the information applies to Linux, OS X and Cygwin (command line utilities for Windows).

The [[Special topics]] page also has some references and resources for command line stuff.
Using the [[PyMEL|http://code.google.com/p/pymel/]] version of the randomizeTranslate() function for //Maya//, create a module containing separate functions for randomizing scale, rotation and translation. Be sure to include appropriate comments, the standard assignment header and docstrings for the module and the separate functions. You also should assign appropriate default values to the arguments of each of your functions.

There is no need for doctests or a &ldquo;main&rdquo; function/block.

!!!!Submission
Submit your file via e-mail as an attachment. The file/module name should be {{{randomizeTransform.py}}}.

Once your exercise is approved, you may place a copy of this file in the dropbox in a subdirectory titled, {{{LastnameFirstname_Exercises/}}}.
''These are bits and pieces that may or may not appear in the official notes for the class.'' An internal holding ground for leftovers.

You should start reading the following sections of the Maya online documentation:
*//User&rsquo;s Guide > General > MEL and Expressions//
*//User&rsquo;s Guide > General > Python//

You should spend some time looking over the built-in scripts (including those that come with the //Maya Bonus Tools//), the built-in commands and built-in nodes.

''Assignment:'' [[Fit function exercise|TECH 312: Fit function assignment]] &mdash; You should work to submit this to me via e-mail before the next class. Depending on what we are able to cover in class 5, you may be making some changes to the script after next class.

You should have emailed a copy of your [[fit function exercise|TECH 312: Fit function assignment]] to me by today&rsquo;s class. (The specification for the exercise has been updated to include standalone/command line requirements and information about the grading standards for this assignment.)

While there is no requirement to make your fit() function accessible from the command line, if you are interested in experimenting, you might find the //[[getopt|http://docs.python.org/library/getopt.html]]// or //[[argparse|http://docs.python.org/library/argparse.html]]// modules in the Python Standard Library of use. (The //argparse// module would be my first choice.)
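For instance, a hypothetical {{{fit()}}} (the five-argument signature shown is my assumption, not the class specification) might be wired to //argparse// like this:

```python
#!/usr/bin/env python
"""Sketch: exposing a fit() remapping function on the command line."""
import argparse


def fit(value, oldmin=0.0, oldmax=1.0, newmin=0.0, newmax=1.0):
    """Remap value from the old range to the new range (signature assumed)."""
    return newmin + (value - oldmin) / (oldmax - oldmin) * (newmax - newmin)


def build_parser():
    parser = argparse.ArgumentParser(description="Remap a value between ranges.")
    parser.add_argument("value", type=float, help="value to remap")
    parser.add_argument("--old", nargs=2, type=float, default=[0.0, 1.0],
                        metavar=("MIN", "MAX"), help="input range")
    parser.add_argument("--new", nargs=2, type=float, default=[0.0, 1.0],
                        metavar=("MIN", "MAX"), help="output range")
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    print(fit(args.value, args.old[0], args.old[1], args.new[0], args.new[1]))
```

Run as, for example, {{{./fit.py 5 --old 0 10 --new 0 1}}}; //argparse// generates the {{{--help}}} text for free.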

''Assignment:'' [[Ishtime exercise|TECH 312: Ishtime assignment]]. Do not start this assignment before class.

''Fit function exercise:'' Based on our discussion of the //doctest// module, you should modify your {{{fit.py}}} to incorporate appropriate doctests. You all also have received feedback/revision requests from me. You should implement those changes, in addition to the docstrings, and resubmit your files before next class.
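As a sketch of what embedded doctests look like (again, the {{{fit()}}} signature here is my assumption &mdash; adapt the tests to your own function and its edge cases):

```python
"""Sketch: doctests embedded in a function docstring."""


def fit(value, oldmin=0.0, oldmax=1.0, newmin=0.0, newmax=1.0):
    """Remap value from [oldmin, oldmax] to [newmin, newmax].

    >>> fit(5, 0, 10, 0, 1)
    0.5
    >>> fit(0.25)
    0.25
    >>> fit(2, 0, 4, 10, 20)
    15.0
    """
    return newmin + (value - oldmin) / (oldmax - oldmin) * (newmax - newmin)


if __name__ == "__main__":
    import doctest
    doctest.testmod()  # silent when every example passes
```

Remember that the expected output lines must match what the interpreter actually prints, character for character.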

The //Houdini// ~CHOPs/LED demonstration that I did in class utilized a controller from [[Phidgets|http://www.phidgets.com/]]. The Apple touchpad example was based on //[[Kivy|http://kivy.org/]]//.

I have posted {{{python_phidgets}}} to _MATERIAL. These are the example files from the //Houdini// ~CHOPs to Phidgets LED demonstration I did last class. [Apple touchpad example to follow...]
From NPR, [[Program Creates Computer-Generated Sports Stories|http://www.npr.org/templates/story/story.php?storyId=122424166&ps=rs]] and [[Robot Journalist Out-Writes Human Sports Reporter|http://www.npr.org/2011/04/17/135471975/robot-journalist-out-writes-human-sports-reporter]]. Both are stories about [[Stats Monkey|http://infolab.northwestern.edu/projects/stats-monkey/]].

And from //Radiolab//: &ldquo;[[Talking to Machines|http://www.radiolab.org/2011/may/31/]]&rdquo;.

Create a command line script, {{{sentence_generator.py}}}, which prints syntactically-correct sentences. Here are some examples:
{{{
A green, round woman took a round ball.
A round woman slowly liked a table on the table.
The flat, flat dog slowly tipped a flat dog.
A pig slowly saw a painting.
The table quickly bumped a flat, red dog.
The flat dog bumped a flat, flat woman.
The red painting eventually liked the blue, green dog.
A blue pig saw the man.
A painting eventually tipped a pig.
A dog quickly loathed the red man.
}}}

The sentences may be nonsense on a semantic level, but they must have proper sentence structure, capitalization and punctuation.

The primary function should be defined as
{{{
def generate_sentence():
}}}
and should return a string containing the generated sentence.

In addition to the primary function, you should include a test procedure, {{{test(count=10)}}}, which automatically tests your primary function by generating //count// sentences. The {{{test()}}} function should print its results, one sentence per line.

If your script is run from the command line like this
{{{
./sentence_generator.py
}}}
it should respond with a single sentence.

If it is run like this
{{{
./sentence_generator.py -test
}}}
it should run and print a default of ten sentences.

If it is run like this
{{{
./sentence_generator.py -test 14
}}}
it should run and print fourteen sentences, etc.

Additional command line arguments and/or more sophisticated behavior of the script will result in additional credit.

You will find that defining some behind-the-scenes utility functions will be useful. This script will incorporate almost every topic that we have covered to date.
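One possible shape for the script (a sketch only &mdash; the word lists, helper names and grammar are illustrative, and a strong solution would go further, e.g. with prepositional phrases):

```python
#!/usr/bin/env python
"""sentence_generator.py -- print syntactically-correct nonsense sentences."""
import random
import sys

# Illustrative word lists; expand these for more variety.
ARTICLES = ["a", "the"]
ADJECTIVES = ["green", "round", "flat", "red", "blue"]
NOUNS = ["woman", "man", "dog", "pig", "table", "ball", "painting"]
ADVERBS = ["", "slowly", "quickly", "eventually"]
VERBS = ["took", "liked", "tipped", "saw", "bumped", "loathed"]


def noun_phrase():
    """Build 'article [adjective(s)] noun', e.g. 'a flat, red dog'."""
    article = random.choice(ARTICLES)
    adjectives = ", ".join(random.sample(ADJECTIVES, random.randint(0, 2)))
    noun = random.choice(NOUNS)
    return " ".join(word for word in (article, adjectives, noun) if word)


def generate_sentence():
    """Return one capitalized, period-terminated sentence."""
    parts = (noun_phrase(), random.choice(ADVERBS),
             random.choice(VERBS), noun_phrase())
    sentence = " ".join(part for part in parts if part)
    return sentence[0].upper() + sentence[1:] + "."


def test(count=10):
    """Print count generated sentences, one per line."""
    for _ in range(count):
        print(generate_sentence())


if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "-test":
        test(int(sys.argv[2]) if len(sys.argv) > 2 else 10)
    else:
        print(generate_sentence())
```

Note how {{{noun_phrase()}}} does double duty for both subject and object &mdash; that is the sort of behind-the-scenes utility function to look for.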

You should document your script with docstrings and appropriate comments.

Doctests would //not// be appropriate for this module, as the output varies every time the {{{generate_sentence()}}} function is called.

Once your exercise is approved, you may place a copy of this file in the dropbox in a subdirectory titled, {{{LastnameFirstname_Exercises/}}}.

!!!!Brainstorm groups
During one of the class sessions, you will be shuffled into groups of three or four. Specific instructions will be given then, but you should be working on your exercise and have code ready to show for that class. Below the standard header comment that should be in every Python file you create for the class, you should include a comment that indicates your brainstorm collaborators. For example,
{{{
# TECH 312 Winter 2011
# Ken Huff
# Brainstorming: Guido van Rossum, Harry Houdini and Mary Oliver
}}}

!!!!References
*//[[String Formatting Operations|http://docs.python.org/library/stdtypes.html#string-formatting-operations]]// in the Python documentation may be of use.
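For instance (a small sketch; nothing here is required by the exercise), the formatting operations can assemble sentence templates:

```python
# %-style string formatting: substitute words into a sentence template.
template = "%s %s %s a %s."
sentence = template % ("The", "dog", "bumped", "table")
print(sentence)  # -> The dog bumped a table.

# str.format() offers the same substitution with numbered placeholders.
sentence_again = "{0} {1} {2} a {3}.".format("The", "dog", "bumped", "table")
```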
<<timeline "" 25>>
Here are some suggestions of text editors to use for script development and features to look for in an editor.

In my classes, you are welcome to use any text editor, but you should not be using Maya&rsquo;s built-in Script Editor or the code editors in Houdini as your primary editors. In addition to jEdit, Professor Kesson&rsquo;s //[[Cutter Text Editor|http://www.fundza.com/]]// works well. Another text editor available for all platforms is //[[gedit|http://projects.gnome.org/gedit/]]//.

Mac OS X users should consider ~BareBones Software&rsquo;s commercial text editor, //[[BBEdit|http://www.barebones.com/products/bbedit/]]//, or its free sibling, //[[TextWrangler|http://www.barebones.com/products/textwrangler/]]//. Coding Monkeys has created //[[SubEthaEdit|http://www.codingmonkeys.de/subethaedit/]]//, a very slick collaborative text editor. ~MEL-specific syntax modules for //~BBEdit// and //~TextWrangler// are available [[here|http://www.melscripting.com/]].
Whenever forced to use Windows, I find myself liking //[[Notepad++|http://notepad-plus-plus.org/]]//.

What? You say you don&rsquo;t want no stinkin&rsquo; GUI? There are many command line text editors: {{{vi}}}, {{{emacs}}}, {{{nano}}} and variants thereof. Search online or in the {{{man}}} pages for specifics.

!!!!Collaborative code editing
These are web-based and standalone application solutions that allow for realtime, collaborative editing of the same text.
*http://collabedit.com/ &mdash; Understands Python code; browser-based, with a very simple mechanism for sharing documents for collaboration.
*//[[SubEthaEdit|http://www.codingmonkeys.de/subethaedit/]]// &mdash; Mac OS X only; standalone application; very well done.
There are quite a few other collaborative text editors out there, with varying feature sets.

!!!!jEdit
During class, I will be using //[[jEdit|http://jedit.org/]]// as my primary text editor. It is a Java-based, platform-independent application. //jEdit// recognizes Python syntax automatically. A [[MEL-specific syntax module|http://www.creativecrash.com/maya/downloads/applications/syntax-scripting/c/jedit-mel-syntax-highlighting-mode]] is available for //jEdit// ([[generic installation instructions|http://www.creativecrash.com/maya/tutorials/using-tools-scripts/c/configuring-jedit-with-maya]]). This module provides some simple syntax highlighting and color coding of MEL. Instructions for setup of //jEdit// and the //Maya// module in the context of Montgomery Hall&rsquo;s Linux environment are below.

Instructions for [[setting up jEdit at SCAD can be found here.|jEdit: Set-up at SCAD]] They walk through installation of syntax highlighting and preparing preferences to follow the user from workstation to workstation.

!!!!What to look for in a text editor
Here is a list of features and/or settings you should look for in your text editor:
*Essentials
**Display of line numbers.
**Color-coded syntax highlighting for your chosen programming languages &mdash; Python syntax highlighting is available in all of the text editors I have mentioned; MEL syntax highlighting is available in some, typically as a third-party add-on.
**Automatic substitution of spaces for tabs &mdash; I always turn on this feature and use 4 spaces per tab; you should not mix tabs and spaces.
**Launching and opening files from the command line &mdash; I love my ~GUIs, but it is often much easier to open a file, or multiple files, from the command line.
*Nice if you can get &rsquo;em
**Bracket/Brace/Parenthesis balancing &mdash; Even simple highlighting of matching {{{( )}}} can be a good sanity keeper; some text editors will automatically type the matching parenthesis for you.
**Keyword completion &mdash; Start typing the word and the text editor will offer suggestions; depending on how it is implemented this ranges from very helpful (avoiding typographical errors) to annoying.
**Documentation/reference lookups.
**Display of invisible characters, such as tabs and spaces. This can be very helpful if you have inadvertently mixed tabs and spaces when indenting Python code.

!!!!Display typeface selection
When editing code, it is best to work with a monospaced typeface. Often, we need to see patterns in code. A missing or additional character can be a syntax error. A monospaced typeface causes characters to align in clearly-visible columns, making the comparison of two or more lines of code much simpler.
29 June 2012 &mdash; Updated [[Python notes]] with some new resources.

27 June 2012 &mdash; Added a note on [[bit-wise manipulations in Python|Python: Bit-wise manipulations]]

21 June 2012 &mdash; Added notes for //[[Looking and Seeing]]// (these are being updated on a weekly basis for the next couple of months)

7 June 2012 &mdash; In my [[Python notes]], I added some direct links to exercises.

29 May 2012 &mdash; I have fixed and relaunched [[my blog|http://www.kennethahuff.com/blog/]], on the main portion of my web site. Moved //[[Brain Kibble|http://www.kennethahuff.com/blog/category/brain-kibble/]]// to the blog, as well.

21 February 2012 &mdash; Updated [[photography notes|Photography: Notes]] for fifth session of a currently-running class.

15 December 2011 (from Hong Kong) &mdash; Notes related to presentations given at SIGGRAPH Asia 2011
*[[Stereoscopic imaging overview notes|Stereoscopic: Links]]
*[[Looking and Seeing Differently]]
''VSFX 350: Procedural Modeling and Animation''

Jump to notes for class [[1|VSFX 350: Class 1]], [[2|VSFX 350: Class 2]], [[3|VSFX 350: Class 3]], [[4|VSFX 350: Class 4]], [[5|VSFX 350: Class 5]], [[6|VSFX 350: Class 6]], [[7|VSFX 350: Class 7]], [[8|VSFX 350: Class 8]], [[9|VSFX 350: Class 9]], [[10|VSFX 350: Class 10]], [[11|VSFX 350: Class 11]], [[12|VSFX 350: Class 12]], [[13|VSFX 350: Class 13]], [[14|VSFX 350: Class 14]], [[15|VSFX 350: Class 15]], [[16|VSFX 350: Class 16]], [[17|VSFX 350: Class 17]], [[18|VSFX 350: Class 18]], [[19|VSFX 350: Class 19]], [[20|VSFX 350: Class 20]]; [[Open all in new tab|index.html#%5B%5BVSFX%20350%5D%5D%20%5B%5BVSFX%20350%3A%20Class%201%5D%5D%20%5B%5BVSFX%20350%3A%20Class%202%5D%5D%20%5B%5BVSFX%20350%3A%20Class%203%5D%5D%20%5B%5BVSFX%20350%3A%20Class%204%5D%5D%20%5B%5BVSFX%20350%3A%20Class%205%5D%5D%20%5B%5BVSFX%20350%3A%20Class%206%5D%5D%20%5B%5BVSFX%20350%3A%20Class%207%5D%5D%20%5B%5BVSFX%20350%3A%20Class%208%5D%5D%20%5B%5BVSFX%20350%3A%20Class%209%5D%5D%20%5B%5BVSFX%20350%3A%20Class%2010%5D%5D%20%5B%5BVSFX%20350%3A%20Class%2011%5D%5D%20%5B%5BVSFX%20350%3A%20Class%2012%5D%5D%20%5B%5BVSFX%20350%3A%20Class%2013%5D%5D%20%5B%5BVSFX%20350%3A%20Class%2014%5D%5D%20%5B%5BVSFX%20350%3A%20Class%2015%5D%5D%20%5B%5BVSFX%20350%3A%20Class%2016%5D%5D%20%5B%5BVSFX%20350%3A%20Class%2017%5D%5D%20%5B%5BVSFX%20350%3A%20Class%2018%5D%5D%20%5B%5BVSFX%20350%3A%20Class%2019%5D%5D%20%5B%5BVSFX%20350%3A%20Class%2020%5D%5D]]

[[VSFX 350: Preparing for the class]]
!!!!Assignments
*[[Head shot]]
*[[Procedural building (Project 1)|VSFX 350: Procedural building project]]
*[[Procedural forest exercise|VSFX 350: Procedural forest assignment]]
*[[Procedural building as Houdini Digital Asset|VSFX 350: Procedural building as Houdini Digital Asset assignment]]
*[[Procedural animation (Project 2)|VSFX 350: Procedural animation project]]
!!!!Potential assignments
*[[Rotary saw blade exercise|VSFX 350: Rotary saw blade assignment]]
*[[Lissajous curve|VSFX 350: Lissajous assignment]]
*[[Fizz-Buzz|VSFX 350: Fizz-Buzz assignment]]
!!!!Resources
*[[Houdini resources|Houdini: Links]]
*[[Proceduralism notes and resources|Proceduralism: Notes]]

!!!!Parameter and expression sharing for lectures
An on-line collaborative system has been prepared for sharing //Houdini// parameter values and expression formulas during the class. During demonstrations, parameter values will be published and be available in real time via a web browser.

[[Parameter sharing for VSFX 350|http://www.kennethahuff.com/teaching/CollaborativeText.html?mode=view&mob=kah_3501]]

These pages are erased automatically soon after each class session. You should copy the contents to a text file if you would like to keep the information for reference.

!!!!Documentation readings and links
Many of the links in these class notes are to //Houdini// documentation and will work only if //Houdini// currently is running on your workstation and you have accessed the Help system. (//Houdini// uses an embedded web server to manage documentation. That server is started when you first access the documentation.) """SideEffects""" also has published the //Houdini// [[documentation on the web|http://www.sidefx.com/index.php?option=com_content&task=view&id=1085&Itemid=281]] (in which case, example files will not be available).

As an on-going reading assignment, whenever we work with a new operator, subsystem or expression function in class, you should review its documentation. Many of these operators and functions will be highlighted in these class notes, but not all. Most will have additional useful features which will not be directly discussed in class.
/%
!!!!Software versions
In the labs and in class, I will be using //Houdini// version 11, which also is available on SCAD&rsquo;s render farm. If you are using version 10 for your class work, please place an empty text file, named {{{houdini10.txt}}}, at the top level of your drop box directory. If I see this file, I will evaluate your exercises and projects using version 10; otherwise, I will assume you are using version 11.

If you are emailing me a .hip file, also please let me know if you are using //Houdini// 10.
%/
!!!!Exercise grading
The grading for exercises is structured such that if you complete all exercises, meeting the minimum specifications, you will receive an 85%. If you make a reasonable attempt at all exercises, but take none of them to completion, you will receive a 70%. Missing exercises impose a penalty. Most, if not all of the exercises, can be expanded to include extra features. To move your grade above 85%, go beyond the specifications, demonstrate exploration, make evident a deeper understanding of the principles and employ visual creativity. Suggestions for additional features will be given with most exercises but do not limit yourself to these suggestions.
!!!!Exercise resubmissions
When submitting revisions to an exercise, use the following naming convention: {{{LastnameFirstname_Ex01_resubmit01...}}} Send me an email when you place resubmissions in the drop box. I will not find them spontaneously. Previous versions of the exercise remain in the drop box.

!!!!Drop box
The drop box is not to be used as personal storage. You should be submitting copies of the files that make up your exercises and projects. You should not be using the drop box as a working directory. Files and directories in the drop box which have not specifically been requested for an assignment may be deleted without notice. You have been warned (insert diabolical laughter here).
!!!!Introducing yourself to Houdini
In the //[[Houdini Help|http://localhost:48626/]]// documentation, work through the //[[Interface intro|http://localhost:48626/start/intro]]// section and the //Tutorial videos// on the //[[Welcome to Houdini|http://localhost:48626/start/]]// page (7 videos). The //[[Maya to Houdini transition guide|http://localhost:48626/start/maya_transition]]// section also may be useful.

The final video, //[[Node based workflow,|http://www.sidefx.com/images/stories/blogs/houdini10_blog/NodeWorkflow/procedural_forest.mov]]// contains a demonstration of a procedural forest, similar to the version we covered in class. Before class 2, you should attempt the project. ''Stop'' at the point in the video when a digital asset is created (at around 12 minutes; of course, you may watch it, but do not turn your version into a digital asset). We will cover digital assets later in the quarter. Be prepared for class 2 with your questions.
!!!!stamp() function clarification
When working through the //[[Node based workflow|http://www.sidefx.com/images/stories/blogs/houdini10_blog/NodeWorkflow/procedural_forest.mov]]// video, note that the {{{stamp()}}} function accepts strings for its first two arguments. Strings in HScript, one of Houdini&rsquo;s expression languages, are enclosed in double-quotation marks ({{{"}}}). In the video, some people mistake those quotation marks for asterisks ({{{*}}}). Therefore, a {{{stamp()}}} function might look like this
{{{
stamp("../copy1", "pointNumber", 0)
}}}
Not this
{{{
stamp(*../copy1*, *pointNumber*, 0)
}}}
!!!!Setting up a bash_custom file
You should prepare your [[bash_custom]] file before class 2. I also have placed a copy in _MATERIAL. If you already have a {{{bash_custom}}} setup, you should merge what I am providing with your current file.

As a reminder, I expect you to be working under Linux in class and to launch //Houdini// from the command line. You should not launch //Houdini// from the menu bar at the top of the screen or by double-clicking on a //Houdini// icon.
!!!!Exercise: [[Head shot]]
Before class 2, you should have a [[head shot|Head shot]] in place in your drop box.
!!!!Project 1
In preparation for [[Project 1|VSFX 350: Procedural building project]], you should find at least one reference image for your procedural building. The building may be an existing building, a proposed building or a building that you design. If you are designing your own building, I will expect that you prepare fairly detailed concept artwork before starting the implementation.
[[Project 1|VSFX 350: Procedural building project]] is due at the start of class 12. Today will be a review session.

Information has been posted regarding an upcoming [[extra help session|Extra help sessions]].

Here are some notes on [[shader development, lighting, rendering and tuning of shading quality.|Houdini: Shading notes]] These notes contain some information specific to [[project 1|VSFX 350: Procedural building project]] and supplement high-level notes given for [[class 8|VSFX 350: Class 8]].
[[Project 1|VSFX 350: Procedural building project]] is due at the start of next class. Today will be a review session.

''Important:'' At the Scene level of your .hip, you should include a sticky note that documents improvements/to-do items that you would like to make to your project and an additional sticky for known problems. This kind of &ldquo;self criticism&rdquo; is a crucial skill to develop.

A [[final bit of inspiration for your building projects|http://www.shorpy.com/node/9904?size=_original]].
[[Project 1|VSFX 350: Procedural building project]] is due at the start of class.

{{{PieWedge_001.hipnc}}} has been added to _MATERIAL. The file contains an example of [[multiple-line HScript expressions.|Houdini: Multiple-line expressions]]
!!!!Project 1 revisions
Based on the feedback that you received last class, your revisions on Project 1 are due today. You should append {{{_revised}}} to any files that you are submitting based on revisions. For example, {{{LastnameFirstname_Project1.mov}}} becomes {{{LastnameFirstname_Project1_revised.mov}}}.

If you submit anything for project 1 after 1:30 p.m. on Tuesday, 10 May, you should email me to let me know that your files have been updated.

!!!!Houdini Digital Assets
Houdini Digital Assets (~HDAs) allow us to encapsulate networks into our own custom operators. In the documentation, review the //[[Digital assets|http://localhost:48626/assets/]]// page. Pay particular attention to the following subtopics:
*//[[Anatomy of a digital asset|http://localhost:48626/assets/anatomy]]//
*//[[Create a digital asset|http://localhost:48626/assets/create]]//
*//[[Create a user interface for an asset|http://localhost:48626/assets/asset_ui]]//
*//[[Load and manage assets on disk|http://localhost:48626/assets/install]]//
We will go over the process in class.

''Assignment:'' [[Procedural building as Houdini Digital Asset|VSFX 350: Procedural building as Houdini Digital Asset assignment]] &mdash; This exercise should be completed by class 14.
The following has been added to _MATERIAL:
*{{{SurfaceDeformationBasedOnLuminance/}}} &mdash; This project contains a number of examples of techniques that can be used to deform a surface based on file-based images. It illustrates the use of the {{{tex()}}} expression function. It includes two examples of a ~GeoTIFF workflow and a VOP SOP to deform the surface. The VOP SOP version is much faster, allowing for the use of higher resolution images. The original ~GeoTIFF files were downloaded from the [[United States Geological Survey|http://www.usgs.gov/]]. It also includes examples that use imported [[Digital Terrain Model|http://hirise.lpl.arizona.edu/dtm/]] data and false color altimetry imagery from the [[Mars HiRISE|http://hirise.lpl.arizona.edu/]] program. [[This blog post|http://hirise.lpl.arizona.edu/HiBlog/2010/01/20/first-pds-release-of-hirise-dtms/]] describes the ~HiRISE DTM data. I also included an example of exporting geometry using a ROP Output Driver SOP and then importing the geometry using a File SOP. This is one method which can be used to cache the results of a complex SOP network.

!!!!Procedural animation project
[[The specification for the procedural animation project|VSFX 350: Procedural animation project]] has been posted. By the start of class 15 (next class), you should have preliminary concept artwork in place in the drop box for review.
[[Famous Curves Index|http://www-history.mcs.st-and.ac.uk/Curves/Curves.html]] &mdash; The stories and formulas for some well-known curves.

!!!!Additional point of inspiration for procedural animation project
Kevin Webster&rsquo;s //[[metacosm project|http://rabidpraxis.com/projects/metacosm_project/]]//. Kevin has posted a number of the generated videos on [[vimeo|http://vimeo.com/kevinwebster/videos/]] and has followed up the project with a new work-in-progress, //[[the metacosm project redux|http://rabidpraxis.com/projects/metacosm_project_redux/]]//.

!!!!Channel Operators (~CHOPs) references
Two starting points for ~CHOPs information in the documentation: //[[Channel nodes|http://localhost:48626/nodes/chop/]]// and //[[Motion view|http://localhost:48626/ref/views/chopview]]//.

The following nodes and functions were used in class:
*~SOPs: Channel, CHOP Network Manager and Null
*~CHOPs: Export, Geometry, Math, Merge, Noise, Null, Shift and Wave
*Expression functions: {{{chop()}}} and {{{chopn()}}} (be sure to look over the other {{{chop*}}} functions as well)
If you are interested in the sound-related features of ~CHOPs and //Houdini//, Andrew Lowell&rsquo;s electronic book, //[[Simultaneous Music, Animation and Sound Techniques with Houdini|http://www.andrew-lowell-productions.com/andrew-lowell-productions/resources.html]]// is an excellent resource.

The following has been added to _MATERIAL:
*{{{CHOPsExamples/}}} &mdash; contains a number of small example files for ~CHOPs based networks. //More to come as we progress in class.//
!!!!Custom Python ~SOPs
In _MATERIAL, I have added a directory, {{{python_gis/}}}, which contains an example of custom Python ~SOPs in Houdini. See the {{{00_README_from_Ken.txt}}} file in the directory for more information. This is not something we will be covering in class, but I welcome any questions you might have.
If you are interested in music and/or sound visualization, you might enjoy the [[Create Digital Motion|http://createdigitalmotion.com/]] and [[Create Digital Music|http://createdigitalmusic.com/]] blogs. Lots of good stuff.

I have added a note about [[multiple-line expressions in Houdini|Houdini: Multiple-line expressions]], an apparently undocumented feature.

I have mentioned that the .hip file format is a representation of a file hierarchy. The {{{hexpand}}} and {{{hcollapse}}} command-line tools can be used to split a .hip file into its constituent parts and to reassemble such a directory structure into a single .hip file. Execute each command without any arguments to see some basic usage. These commands only work with files created using a commercial license of //Houdini//. The {{{otexpand}}} and {{{otcollapse}}} //~HScript// commands provide similar manipulation of operator type library (.otl) files.

The //Houdini// ~CHOPs/LED demonstration that I did in class utilized a controller from [[Phidgets|http://www.phidgets.com/]].
There will be an [[extra help session|Extra help sessions]] this Saturday.

{{{CHOPs_Examples}}} has been updated with {{{CHOPsExample_061_PitchVisualization.hipnc}}}.
Of interest: [[Matt Ebb|http://mke3.net/]] has created a [[raytracer VOP SOP|http://mke3.net/weblog/raytracer-vopsop/]] and has posted some videos: [[one|http://vimeo.com/20700092]] and [[two|http://vimeo.com/22438117]]. [[Another video|http://vimeo.com/21436831]] with a link to the .hipnc file in the comments.
Today&rsquo;s class was brought to you by the following """SOPs""" (Surface Operators): """AttribCreate""", Box, Copy, Grid, """LSystem""", Merge, Mountain, Paint, Scatter, Switch and Transform.

In the //[[Houdini Help|http://localhost:48626/]]// documentation, review the pages linked to in the &ldquo;Getting started&rdquo; section of the //[[Basics|http://localhost:48626/basics/]]// page. The //pscale// attribute that was added in order to vary the scale of the trees in the procedural forest is described briefly in //[[Instancing point attributes|http://localhost:48626/copy/instanceattrs]]// as part of the //[[Copying and instancing|http://localhost:48626/copy/]]// documentation.

[[Here is a link to a post on the SideEffects forums|http://www.sidefx.com/index.php?option=com_forum&Itemid=172&page=viewtopic&t=6679&highlight=copy+sop]] regarding some of the more obscure and poorly-documented features of copying and instancing.

Here is an //Old School Blog// post about [[point instancing|http://www.sidefx.com/index.php?option=com_content&task=view&id=1050&Itemid=216]] and the new {{{instancepoint()}}} expression function (>= Houdini 9). Here is a tutorial by Peter Quint on [[instancing lights|http://vimeo.com/8681435]].

Also of interest is the //[[Edit Parameter Interface window|http://localhost:48626/ref/windows/edit_parameter_interface]]//.

You should do a cursory review of //[[Expression functions|http://localhost:48626/expressions/]]// and //[[Global expression variables|http://localhost:48626/expressions/_globals]]//. We will revisit these topics in greater detail throughout the quarter.

''Assignment:'' Exercise 1 &mdash; [[Procedural forest|VSFX 350: Procedural forest assignment]]

[[A recent customer story|http://www.sidefx.com/index.php?option=com_content&task=view&id=1694&Itemid=68]] on the """SideEffects""" web site highlights the use of Houdini by Framestore in //Avatar.// Pay particular attention to the last six paragraphs. Procedural forests and Copy """SOPs""", oh my!

As possible reference for [[Project 1|VSFX 350: Procedural building project]], [[Filip Dujardin|http://www.filipdujardin.be/]] creates photographic images of imagined architecture. Here is [[a post on we-find-wildness.com|http://www.we-find-wildness.com/2010/09/filip-dujardin/]] with some images of his work.
The [[procedural animation project|VSFX 350: Procedural animation project]] is due at the end of class today.
The [[Project 1|VSFX 350: Procedural building project]] specification has been finalized.

Review the //Parameter and expression sharing for lectures// information in the main [[VSFX 350]] notes.

See the //Morphogenesis (and ~L-Systems)// section of [[Proceduralism notes and resources|Proceduralism: Notes]] for additional references on ~L-Systems.

A [[recent video masterclass|http://www.sidefx.com/index.php?option=com_content&task=view&id=1810&Itemid=305]] from ~SideEffects gives a very good overview of Python in //Houdini// with emphasis on //Houdini// version 11. (This simply is a reference for those who are interested in Python. It is //not// an assignment for VSFX 350.)
By today&rsquo;s class, you should have the [[procedural forest exercise|VSFX 350: Procedural forest assignment]] completed. After today&rsquo;s class, if you make any changes to your exercise, either on your own or at my request, you must send me an e-mail letting me know that I should reevaluate the exercise.

I have written a tip for [[managing crashes and freezes in Houdini|Houdini: Managing crashes and freezes]].

On the ~SideEffects //Old School Blog,// there are three articles related to painting point/particle density: [[one|http://www.sidefx.com/index.php?option=com_content&task=view&id=1030&Itemid=216]], [[two|http://www.sidefx.com/index.php?option=com_content&task=view&id=1032&Itemid=216]] and [[three|http://www.sidefx.com/index.php?option=com_content&task=view&id=1040&Itemid=216]].

The following example files have been added to _MATERIAL:
*{{{ControllingScatterSOPWithPainting_001.hipnc}}} &mdash; A variation on the technique described in the ~SideEffects //Old School Blog// posts mentioned above. The main difference is that I use the term &ldquo;density&rdquo; rather than &ldquo;area&rdquo;.
*{{{CutOutStairs_001.hipnc}}} &mdash; An example using Add ~SOPs and a Cookie SOP to construct a simple stair structure. Fun with connecting the dots.
*{{{SpiralStaircase_001.hipnc}}} &mdash; A spiral staircase with railing. Pay particular attention to the use of the Group Geometry SOP. We will cover this in detail in class 5.
!!!!Example files
The following example files have been added to _MATERIAL:
*{{{CorrugatedPanel_001.hipnc}}} &mdash; a basic Group Geometry SOP example.
*{{{FlaredRoof_001.hipnc}}} &mdash; The very simple beginnings of a flared roof line.
*{{{ProceduralBuildings_FootprintDeformation.hipnc}}} &mdash; A house with a procedural footprint.
!!!!References
*[[Group specification patterns/strings|http://localhost:48626/model/groups#manual]] &mdash; A table which documents the syntax for the Group parameter available on many nodes. Note that you can have multiple group patterns in a single Group parameter. The final result will be the combination of all of the patterns, as interpreted from left to right in the parameter string.
The following example files have been added to _MATERIAL:
*{{{VolumesOfCubes_001.hipnc}}} &mdash; Shows the use of a color ramp parameter and the corresponding //[[chramp()|http://localhost:48626/expressions/chramp]]// expression function. The [[Points From Volume SOP|http://localhost:48626/nodes/sop/pointsfromvolume]] used in this file is an example of a built-in operator that is a digital asset. Dive in and take a look.
*{{{Window_001.hipnc}}} &mdash; A wall with lots of configurable windows.
*{{{CatenaryArchWithMetaballs_001.hipnc}}} and {{{CatenaryCurves-Slides.pdf}}} &mdash; Some fun with catenary curves and metaballs.
**[[Metaball SOP documentation.|http://localhost:48626/nodes/sop/metaball]]
**//Wikipedia// pages for [[catenary curves|http://en.wikipedia.org/wiki/Catenary]]
**Euler&rsquo;s Number, //[[e|http://en.wikipedia.org/wiki/E_(mathematical_constant)]]// (Euler is pronounced, &ldquo;~OY-ler&rdquo;)
**[[The Upside Dome: Catenary curves as wireframe in an architectural sculpture installation.|http://www.gijsvanvaerenbergh.com/theupsidedome/]]
''Resource:'' Graham Thompson&rsquo;s [[houdinitoolbox.com|http://www.houdinitoolbox.com/]] &mdash; This one is relatively new, but already there is a great deal of good material.
''[[Project 1|VSFX 350: Procedural building project]] will be due at the start of Class 12.''

!!!!Surface normals in Houdini
New in //Houdini 11// is an option to create surface normal data as a vertex attribute. Previously, normals only could be stored as point attributes. The [[Vertex SOP|http://localhost:48626/nodes/sop/vertex]] is used to manipulate this vertex normal data. An important option on the SOP is //Cusp Normal// which can be used to generate normals based on the relative angles of adjacent faces.

This new feature will allow for the removal of many [[Facet SOPs|http://localhost:48626/nodes/sop/facet]] previously used with their //Unique Points// parameter and //Pre-/~Post-Compute Normals// parameters to generate explicit normals. Diffuse color, alpha and texture coordinates (~UVs) also can be stored on vertices. Each of these attributes can exist on either points or vertices, but not both at once. For example, you cannot have both point and vertex normals on the same geometry.

!!!!Look development
Here is a note on [[command line rendering with Houdini|Houdini: Command line rendering]].

Top-level documentation pages for look development: //[[Lighting|http://localhost:48626/light/]]//, //[[Shelf Tools: Lights and Cameras tab|http://localhost:48626/shelf/lightsandcameras]]//, //[[Shading|http://localhost:48626/shade/]]// and //[[Rendering|http://localhost:48626/rendering/]]//.

Here is a note regarding [[camera and animation setup|VSFX 350: Procedural building project (camera and animation setup)]] appropriate for the project.

In preparation for look development in //Houdini//, you should be sprinkling your procedural building networks with [[Group SOPs|http://localhost:48626/nodes/sop/group]] that define primitive groups for later material assignment. Use {{{Window_001.hipnc}}} in _MATERIAL as an example.

!!!!Additional bits
Remember, you should have your entire building SOP network contained within a single ~SubNetwork. That ~SubNetwork will be inside a Geometry node (which in turn exists at the scene level). You should place your highest-level spare parameter controls on the ~SubNetwork node.

In preparation for your [[Project 1|VSFX 350: Procedural building project]] submissions, there is a [[note regarding some methods for video screen capture.|Video screen capture]]

cmiVFX has [[a video specifically about procedural buildings.|http://www.cmivfx.com/productpages/product.aspx?name=Houdini_Building_Generation]] While I have yet to view the video, I have been very satisfied with previous videos from the company.

''Off topic:'' If you are having the problem of your editor windows disappearing from //Maya 2011// when you switch between single- and dual-monitor workstations, I have posted [[a fix here.|Maya dual-monitor fix]]
Reconstruct the scene that was shown in class. A screen capture image showing the custom parameter interface, some rules and some hints is here. This exercise is a review of some of the expression operators, functions, syntax and idioms we have covered up to this point. It can be accomplished with as few as four ~SOPs, but that is not a requirement.

A copy of {{{FizzBuzz_Instructions_002.png}}} can be found in _MATERIAL or [[here.|inclusions-2010-fall/vsfx350/FizzBuzz_Instructions_002.png]]

You will need the following ~SOPs: //Font//, //Point// or //Color// for color changes (//Material// ~SOPs would be overkill), //Copy// and //Transform//. You might also end up with some //Switch// ~SOPs, but anything you do with a //Switch// also could be done with some expression logic. You will use some or all of the following expression functions: {{{ch()}}}, {{{chs()}}}, {{{if()}}}, {{{ifs()}}}, {{{int()}}}, {{{stamp()}}} and {{{trunc()}}}. You also will use various math operators (+, -, *, / and %) and some logic operators (<, >, <=, >=, !=, ==, ||, && and !).
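As a warm-up for the expression logic, here is the classic Fizz-Buzz branching sketched in Python. Note that this follows the canonical programming-exercise rules (multiples of 3 and 5); the actual rules for this assignment are documented in {{{FizzBuzz_Instructions_002.png}}} and may differ.

```python
def fizzbuzz_label(n):
    """Classic Fizz-Buzz: multiples of 3 -> "Fizz", multiples of 5 ->
    "Buzz", multiples of both -> "FizzBuzz", anything else -> the number.

    In Houdini, equivalent branching can be built with ifs() and the
    modulus operator in a string parameter expression.
    """
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

labels = [fizzbuzz_label(n) for n in range(1, 16)]
```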

The naming convention for the submission is {{{LastnameFirstname_FizzBuzz.hip}}}. Submit only the .hip file.

For additional examples of expressions, see the [[Expression cookbook|http://localhost:48626/ref/expression_cookbook]] in the //Reference// section of the documentation. (All of the image links are broken on this page, but the information is pretty good. Be careful of the Deformation examples as there is at least one typographical/formatting error.)

Note: For the //Font// SOP&rsquo;s Text parameter, special handling is necessary for expressions, as is the case for any string parameter. Local and global variables, such as {{{$F}}} and {{{$HIP}}}, will expand to their stored values. For example, if the Text parameter is set to {{{Frame: $F}}}, the text geometry generated will read {{{Frame: 23}}} on the 23rd frame. If you would like to use an expression for the string, such as {{{ch("../../path/to/a/parameter")}}}, you need to enclose the expression in backticks ({{{``}}}). For example, {{{1 + 1}}} will result in the literal text, {{{1 + 1}}}. If you use {{{`1 + 1`}}}, however, you will get {{{2}}} as the result string.
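A loose Python analogy to the two behaviors, with f-string substitution standing in for variable expansion and an explicit {{{eval}}} standing in for backticks:

```python
frame = 23

# Variable expansion: $F-style substitution happens automatically.
text = f"Frame: {frame}"        # the text "Frame: 23"

# Without backticks, arithmetic in a string parameter stays literal.
literal = "1 + 1"               # the text "1 + 1"

# With backticks, the enclosed expression is evaluated first.
evaluated = str(eval("1 + 1"))  # the text "2"
```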
For this exercise you will be implementing a [[Lissajous curve|http://en.wikipedia.org/wiki/Lissajous_curve]] in //Houdini//. (Here is [[an alternate reference.|http://mathworld.wolfram.com/LissajousCurve.html]])

The specific formulas you should use are documented in {{{LissajousFormula.pdf}}} in _MATERIAL. Your network primarily should consist of a Line SOP connected into a Point SOP. You will use the Position parameter of the Point SOP to implement the formulas. The entire network should be contained in a ~SubNetwork. Your spare parameters should be added to the ~SubNetwork node.

The exact configuration of the curve will vary based on the values in the formulas. The number of points in your original line also will affect the overall shape of the resulting curve. You should include a high level parameter that controls the number of points.
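As a sanity check on the formulas, the same curve can be sampled outside //Houdini//. The amplitudes, frequencies and phase below are illustrative placeholders; use the values documented in {{{LissajousFormula.pdf}}}.

```python
import math

def lissajous_points(num_points, A=1.0, B=1.0, a=3, b=2, delta=math.pi / 2):
    """Sample points along the parametric Lissajous curve
    x = A * sin(a*t + delta), y = B * sin(b*t) for t in [0, 2*pi].

    num_points plays the role of the Line SOP's point count; more
    points give a smoother curve, fewer give a faceted one.
    """
    points = []
    for i in range(num_points):
        t = 2 * math.pi * i / (num_points - 1)
        points.append((A * math.sin(a * t + delta), B * math.sin(b * t)))
    return points

curve = lissajous_points(200)
```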

While you should work within a project directory as you develop your work, you will submit only the final .hip file in the drop box. Use this naming convention for the submitted file:

&nbsp;&nbsp;&nbsp;{{{LastnameFirstname_Ex03_Lissajous.hip}}}

Additional features, functionality and exploration will result in credit toward an overall grade of A for exercises. Suggestions include:
*Inclusion of features which change over time
*Application of color
*Creation of additional geometry based on the Lissajous curve
*Feel free to experiment with the formula as well. If you do, make a copy of the original subnetwork and rename it {{{original_formula}}} and place your experiments in adjacent subnetworks.
[TODO: under development]

what is class...

*install houdini 11
*getting started movies (copy from first day notes)
*side effects go procedural
This project will focus on procedural animation, specifically expression-based and ~CHOPs-based animation in //Houdini//. Keep in mind that this project is not to be based on dynamic simulations. Channel data from ~CHOPs and expressions will be the dominant sources of animation for the project. Animation based on keyframes should be kept to a minimum. General themes might include:
*A virtual kinetic sculpture (see below for references)
*A sound-reactive animation
*An animation with sound generated in //Houdini// (be aware that using a non-commercial license affects sound quality)
*Any other animated subject matter that you propose (other than dynamic simulations)
If you would like to create a virtual kinetic sculpture, you are encouraged to work from your own design. You may, however, recreate, with proper attribution, a sculpture from reference. Note that a work of your own design is much more appropriate for use on a demo reel.

!!!!References
Some suggested starting points for research and inspiration (in no particular order and by no means exhaustive):
*[[Tim Prentice|http://www.timprentice.com/]]
*Daniel Rozin ([[personal site|http://www.smoothware.com/danny/]], [[Google Videos|http://video.google.com/videosearch?q=daniel+rozin+wooden+mirror#]], etc.)
*[[Arthur Ganson|http://www.arthurganson.com/]] and [[a TED talk by Mr. Ganson|http://www.ted.com/talks/arthur_ganson_makes_moving_sculpture.html]]
*[[Theo Jansen|http://www.strandbeest.com/]], a [[TED talk by Mr. Jansen|http://www.ted.com/talks/theo_jansen_creates_new_creatures.html]] and a [[Pop!Tech talk|http://www.poptech.org/popcasts/theo_jansen__poptech_2005]]
*[[Kinetica Museum|http://www.kinetica-museum.org/]] (London) (see artist profiles)
*[[Kinetic Sculpture at BMW Museum project by ART+COM|http://www.artcom.de/index.php?option=com_acprojects&page=6&id=62&Itemid=144&details=0&lang=en]]
*//Cloud//, an [[installation at London Heathrow Airport by Troika|http://troika.uk.com/cloud]]
*[[Wind works by Ned Kahn|http://nedkahn.com/wind.html]]
*[[Flyfire|http://senseable.mit.edu/flyfire/]]
*[[Painting with light|http://www.google.com/search?q=painting+with+light]] &mdash; [[Spinning LEDs|http://www.youtube.com/watch?v=79WzI-v1qJk]]; [[PiKA PiKA|http://tochka.jp/pikapika/]]; [[Variations on Pi|http://www.nilsvoelker.com/content/variationsOnPi/]] (on this last reference, be sure to watch the video at the bottom of the page)
*If you are interested in music and/or sound visualization, you might enjoy the [[Create Digital Motion|http://createdigitalmotion.com/]] and [[Create Digital Music|http://createdigitalmusic.com/]] blogs. Lots of good stuff.

!!!!Sound-based projects
If you are interested in the sound-related features of ~CHOPs and //Houdini//, Andrew Lowell&rsquo;s electronic book, //[[Simultaneous Music, Animation and Sound Techniques with Houdini|http://www.andrew-lowell-productions.com/andrew-lowell-productions/resources.html]]// is an excellent resource.

Peter Quint has a number of tutorial videos for ~CHOPs and sound-based projects; //[[CHOPs and Music Driven Animation I|http://vimeo.com/6930074]]// is the first of a series. His //[[Turbulent Trails|http://vimeo.com/5891547]]// series and //[[Refining the Trails Effect|http://vimeo.com/5943053]]// series might also be of interest.

!!!!Suggestions and requirements for all projects
*Include a high level parameter that allows for the slowing down and speeding up of the timing of your rig. For example, if your project was a clockwork mechanism, the user should be able to create a &ldquo;time-lapse&rdquo; effect with this high level parameter.
*With whatever reference you choose, study the motion that you see in the individual elements and in the whole &ldquo;system&rdquo;, analyzing the motion for both periodic and aperiodic patterns. Examine the structural hierarchy of the reference. How would you implement that structure? How does the structure affect the motion? What external forces are acting on the works?
*If your project includes multitudes of similar objects (e.g., the gears in a clock), you should create formal //Houdini// Digital Assets for those objects.
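One way to satisfy the first requirement above, sketched in Python. In //Houdini// this would typically mean building animation expressions on a remapped time such as {{{$T * ch("../speed")}}} rather than raw {{{$T}}} (the {{{speed}}} parameter name here is an assumption, not part of the project spec):

```python
def scaled_time(t, speed):
    """Remap time through a single high-level speed parameter.

    speed > 1 produces a time-lapse effect; 0 < speed < 1 slows the
    whole rig down. Every expression in the rig should consume this
    remapped time instead of raw time.
    """
    return t * speed

# A rotation of 10 degrees per second, running at triple speed:
angle = 10 * scaled_time(2.0, 3.0)
```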

!!!!Concept artwork
You will be producing preproduction concept artwork for this project. You may produce this artwork in any medium, but will be submitting digital images. The concept artwork should convey the overall idea of the piece, indicate major and minor motion, document structural details and provide a key to the basic materials. These images will be submitted in a {{{concept/}}} sub-directory of the project submission, following the specifications below. This is in addition to the {{{reference/}}} described below.

!!!!Submission requirements
The project will be submitted as a directory, {{{LastnameFirstname_Project2/}}}. The directory should contain:
*The .hip file for your project, {{{LastnameFirstname_Project2.hip}}}
**At the Scene level of your .hip, you should include a //~To_Do// sticky note that documents improvements that you would like to make to your project.
**Also at the Scene level, and possibly throughout your networks, you should include //~Known_Problems// stickies to indicate issues which are broken or that might break your building under specific circumstances.
**This kind of &ldquo;self criticism&rdquo; is a crucial skill to develop.
*A plain text file, {{{sources.txt}}}, indicating primary references/influences for the project and the source of the submitted reference images, including appropriate ~URLs. Think of this as the bibliography for your reference images. Include ~URLs to any on-line video documentation that you used as reference.
*A directory, {{{reference/}}}, that contains no more than 10 ~JPEGs of your reference, no larger than 2,000 by 2,000 pixels. There is no set naming convention for the reference images. If your references are video, include some still image captures from the videos.
*A directory, {{{concept/}}}, that contains no more than 10 ~JPEGs of your design, no smaller than 1,000 by 1,000 pixels and no larger than 2,000 by 2,000 pixels. There is no set naming convention for the concept images. See above for information regarding what should be included in these images.
*If your project includes file-based textures, a directory, {{{textures/}}}, containing any texture images used in your project. Important: In your ~SHOPs/Material specifications, when entering file paths for textures, be sure that the paths are relative to the {{{$HIP}}} global variable (e.g., {{{$HIP/textures/filename.pic}}}) and not absolute paths. Use a {{{textures/}}} directory in your project folder while working on this project so that the texture file references are the same for both your working project files and the submission files.
*A ~QuickTime movie, {{{LastnameFirstname_Project2.mov}}}, containing a minimum of 15 seconds of animation, 30 frames per second, high-quality H.264 compression, 1280x720 pixels (720x480 if you are rendering on your personal workstation and using //Houdini Apprentice//).
**The animation should consist of at least two distinct shots.
**The movie also should contain a technical breakdown of your project.
**Additional animation is allowed at your discretion.
**Include an opening title slate showing your name and &ldquo;VSFX 350///Quarter Year///Project 2&rdquo;.
**''Important:'' If your project is a recreation of an existing artwork, you must include a second title slate that gives attribution to the original artist.
**''Important:'' If your project includes a soundtrack, you must include attribution as well.
*A TIFF or PNG file, {{{LastnameFirstname_Project2.[tif|png]}}}, no larger than 1,500 pixels in either dimension, with at least one dimension of 1,500 pixels. The exact aspect ratio is at your discretion. This image should be a //beauty shot// of your project, either in its base configuration or as a procedural variant. The file should be saved flattened, RGB, no alpha channels. Please use LZW compression for ~TIFFs.
*{{{readme.txt}}}, an optional plain text file containing any additional information that you think I should know when evaluating your project.
''Important:'' Precise adherence to these naming and format conventions constitutes 10% of the project grade. Justifiable deviations will be tolerated. Sloppiness will not. Missing elements could severely affect the project grade. Double-check your TIFF files.

!!!!Project deadline
Project 2 will be due at the end of Class 20, but you should be prepared to present your work at the start of class. This is a firm, final deadline (unlike exercise deadlines). Plan accordingly.
For this assignment, convert your procedural building from [[project 1|VSFX 350: Procedural building project]] into a Houdini Digital Asset (HDA). The HDA should contain only your building. If you created ground, environment, trees, people, dogs, horses, cows, birds or anything else in your project 1 scene, they should not be included in the asset. Only the building. Resting on the XZ plane (base of building at Y = 0), centered on the origin. With all of the spare parameter controls of the original, but no keyframe or expression animation.

If you have not already done so for the original project, you should embed any materials that you used inside a SHOP Network Manager inside the ~SubNetwork that you will turn into the digital asset. Be sure to update any material paths to the new location. If you included texture maps in your original project, please omit them and modify the network accordingly for the exercise.

Your digital asset(s) should be created at the SOP level, not the scene level.

When you create the digital asset, follow this naming convention for the //Create New Digital Asset from Node// dialog box:
*Operator Name: {{{P1_LastnameFirstname}}}
*Operator Label: {{{P1 Lastname Firstname}}}
*Remember that operator names, like parameter names, //cannot// contain spaces, but that operator labels, like parameter labels, //can// contain spaces. In the //Save to Library// field, I recommend that you save the OTL in an {{{otls/}}} directory inside a project folder rather than in the default location.

Test your HDA by bringing it into a new scene. Pay attention to the default parameter values. When you first add the digital asset to your new scene, the building should be in a valid, default configuration.

!!!!Submission structure
Use the following file structure for your submission:
*{{{LastnameFirstname_BuildingHDAs/}}} &mdash; A directory containing your submission.
*{{{LastnameFirstname_Building.otl}}} &mdash; An .otl file containing the primary building HDA.
*//{{{YourNamingConventionHere.otl}}}// &mdash; Additional .otl files for your submission (see below).

Note that this naming convention for the .otl, {{{LastnameFirstname_Building.otl}}}, probably is not what you would normally use in day-to-day production. Typically, you would name the .otl file with the same string as the //Operator Name//.

For additional credit on this assignment, create digital assets for the modular elements of your building. For example, if you created a subnetwork that was responsible for constructing all of the window geometry of your asset, turn that into a digital asset and integrate it into the main asset.
For this project, you will create a procedural system for generating the exterior of a building. There are numerous examples of procedural buildings and city generation on the Internet. [[This video|http://blip.tv/file/569059]] from [[Pascal Mueller|http://www.vision.ee.ethz.ch/~pmueller/research.html]], et al. at [[Procedural Inc.|http://www.procedural.com/]] should serve as a point of inspiration for the project. You will be shown examples of previous projects in class.

The following are the minimum requirements for the project:
*The dimensions of the building, both footprint and height, should be changeable through high-level parameters. We will discuss the breakdown of your building into &ldquo;modules&rdquo;. You may resize the building based on the number of these modules, rather than assign absolute measurements. For example, you could define the height of the building based on the number of stories rather than a height such as 120 meters.
*The building should have windows, the number of which is tied to the size of the building.
*There should be at least one entrance.
*There should be a roof structure that is distinct from the main body of the building.
*The building should have multiple material assignments. You may incorporate file-based textures, but they are not required.
*Rendering should be production quality.
*The scene file should be well organized and documented with network boxes, sticky notes, informative node names and comments.
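The module-based sizing described in the first requirement can be sketched as follows ({{{story_height}}} is an illustrative constant, not a value from the project spec):

```python
def building_height(num_stories, story_height=3.0):
    """Derive overall height from a story count rather than an
    absolute measurement, so window/floor modules always fit the
    building exactly."""
    return num_stories * story_height

# Resizing by module count: a 10-story building at 3.0 units per story.
height = building_height(10)
```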

Create your entire building inside a SOP ~SubNetwork, in preparation for turning it into an official Houdini Digital Asset. This will be done as an exercise following the completion of the project.

''You also may be given additional, individual requirements for your project based on your reference building.''

You will work from reference for this project. I suggest that you find and photograph a building as reference, but you also may use found images of existing, proposed or imagined buildings. If you would like to design your own building, you should submit concept artwork in lieu of reference images. While you are not required to replicate all of the minute detail of your reference, the basic forms should be recognizable and be proportional when the building is in its base configuration. Do not include details in the model which should be addressed via texturing, such as brickwork. A building for which you only can find a single reference image likely will not be appropriate for this project &mdash; the more reference you have, the better.

!!!!Resources
In //[[Houdini Help|http://localhost:48626/]]//, you may find the //[[Modeling|http://localhost:48626/model/]]// section (particularly the //Getting Started// topics, including //[[“Box up” modeling|http://localhost:48626/model/box_up_modeling]]//) useful if you would like to add detail beyond simple geometry.

The ''library'' can be an excellent resource for both general and specific reference. Some students have found entire books dedicated to their subject building.

A book that I find useful whenever dealing with architectural subjects, specifically nomenclature, is Francis Ching&rsquo;s //[[A Visual Dictionary of Architecture|http://www.amazon.com/Visual-Dictionary-Architecture-Francis-Ching/dp/0471288217/]]//. At some point during the project, I will bring the book to class. It also is available in the reference section of the library.

Here is a note regarding [[camera and animation setup|VSFX 350: Procedural building project (camera and animation setup)]] appropriate for the project.

Here is a note on [[command line rendering with Houdini|Houdini: Command line rendering]].

Here are some notes on [[shader development, lighting, rendering and tuning of shading quality.|Houdini: Shading notes]]

!!!!Submission requirements
The project will be submitted as a directory, {{{LastnameFirstname_Project1/}}}. The directory should contain:
*{{{LastnameFirstname_Project1.hip}}}, your final .hip file.
**At the Scene level of your .hip, you should include a //~To_Do// sticky note that documents improvements that you would like to make to your project.
**Also at the Scene level, and possibly throughout your networks, you should include //~Known_Problems// stickies to indicate issues which are broken or that might break your building under specific circumstances.
**This kind of &ldquo;self criticism&rdquo; is a crucial skill to develop.
*{{{source.txt}}}, a plain text file, which indicates the location of the building, the name of the architect (if known) and the source of the submitted reference images, including appropriate """URLs""". Think of this as the bibliography for your building and reference images.
*{{{reference/}}}, a directory that contains no more than 10 """JPEGs""" of your reference, no larger than 2,000 by 2,000 pixels. There is no set naming convention for the reference images.
*If your project includes file-based textures, a directory, {{{textures/}}}, containing any texture images used in your project. ''Important:'' In your """SHOPs"""/Material specifications, when entering file paths for textures, be sure that the paths are relative to the {{{$HIP}}} global variable (e.g., {{{$HIP/textures/filename.pic}}}) and not absolute paths. Use a {{{textures/}}} directory in your project folder while working on this project so that the texture file references are the same for both your working project files and the submission files.
*A """QuickTime""" movie, {{{LastnameFirstname_Project1.mov}}}, containing a minimum of 10 seconds of animation, 30 frames per second, high-quality H.264 compression, 1280x720 pixels (720x480 if you are rendering on your personal workstation and using //Houdini Apprentice//).
**The first 5 seconds should be a turntable of the base model, rotating at no more than 10&deg; per second.
**The next 5 seconds should demonstrate the range of variation possible when modifying the building&rsquo;s parameters. For this portion of the animation, there should be little or no camera movement and little or no rotation of the building.
**The movie also should contain a screen capture of the custom parameter interface of your procedural building, shown for a minimum of 1 second.
**Additional animation and technical breakdown is encouraged at your discretion and will count toward improving your grade.
**Include an opening title slate showing your name and &ldquo;VSFX 350/Winter 2011/Procedural Building&rdquo;.
*A TIFF file, {{{LastnameFirstname_Project1.tif}}}, no larger than 1,500 pixels in either dimension, with at least one dimension of 1,500 pixels. The exact aspect ratio is at your discretion. This image should be a “beauty shot” of your building, either in its base configuration or as a procedural variant. The file should be saved flattened, with LZW compression, RGB, no alpha channels.
*{{{readme.txt}}}, an optional plain text file containing any additional information that you think I should know when evaluating your project.

''Project 1 will be due at the start of Class 12.''
In _MATERIAL, the {{{CameraAnimationExample_001.hipnc}}} file contains an example of a simple camera/scene setup that would be appropriate for [[project 1|VSFX 350: Procedural building project]].

A //Null// object is the parent of the camera. The animation, rotation around the Y axis at a rate of 10 degrees per second, is based on an expression ({{{10 * $T}}}) on that //Null// object.
!!!!Animating your building parameters
''Important: Before you create keyframe animation,'' you need to set the length of your timeline and the frame rate for your animation. Click on this button in the lower left corner of the main //Houdini// window:

[img[Global Animation Options|inclusions-2010-fall/vsfx350/GlobalAnimationOptions.png]]

In the resulting //Global Animation Options// dialog, set the //FPS// and //End// values appropriately. When you click on //Apply// or //Close//, you will be presented with a warning about &ldquo;altering channels in order to fit the new frame range&rdquo;. At this point, you may safely select either option, //Stretch Channels// or //No//. Once you have started creating keyframes, however, this operation can become destructive. (We will discuss this more when we start working in earnest with animation.)

!!!!Keyframe animation
The //[[Animation|http://localhost:48626/anim/]]// documentation page is the starting point for review of keyframe-based animation. Important subtopics include //[[Animation channels|http://localhost:48626/anim/channels]]//, //[[Animation basics|http://localhost:48626/anim/basics]]//, //[[Scoping parameters|http://localhost:48626/anim/scope]]// and //[[Edit keyframes and channels|http://localhost:48626/anim/edit]]//. The //[[Channel Editor|http://localhost:48626/ref/panes/chaneditor]]// is the main pane for interacting with keyframe animation data. Be sure to review the subtopics on the //[[channel graph (contains links to the channel segment functions)|http://localhost:48626/ref/panes/changraph]]//, //[[channel spreadsheet|http://localhost:48626/ref/panes/chantable]]// and //[[dopesheet|http://localhost:48626/ref/panes/dopesheet]]//. The //[[Channel List|http://localhost:48626/ref/panes/chanlist]]// from the //Channel Editor// also is available as a separate pane.

For keyframe animation of your building parameters, you can RMB+click on the parameter and select //Expressions and Keyframes -> Set Keyframe//. You will notice that the parameter field turns green, indicating a keyframe/expression on the parameter at that point in time (see the documentation for other color-coding).

If you LMB+click on the parameter label to toggle between value and expression, you will see this expression: {{{bezier()}}}. This is the //channel interpolation// function which determines the blending of values between keyframes. Every keyframe holds the channel interpolation function for the animation curve starting at that keyframe and continuing until the next keyframe.

If you are animating a high-level parameter that is defined as an integer, such as the number of floors in your building, //Houdini// will, somewhat inconveniently, interpolate between the keyframes as if it were a floating point number. To solve this, rewrite the expression as
{{{
int(linear())
}}}
You will need to do this for every keyframe on an integer parameter. The {{{int()}}} function truncates the number, removing the fractional portion, and the {{{linear()}}} causes straight-line interpolation between the keyframes (versus the ease-out-ease-in shape of the {{{bezier()}}} function).

For example, if your building has a high-level parameter, //Number of floors//, and you want to animate it from 2 to 10 over 10 seconds, you can set keyframes at frame 1 and frame 300 (assuming 30 frames per second). As you do this, replace the expressions on each keyframe. //Houdini// now will step through evenly on whole floor numbers (2, 3, 4, 5, etc.) over the 10 seconds rather than producing fractional floor numbers (2.00027, 2.00107, 2.0024, etc.).

This is how the animation curves would compare, with {{{bezier()}}} in orange and {{{int(linear())}}} in green:

[img[Channel interpolation function comparison|inclusions-2010-fall/vsfx350/Animation-bezier-v-int-linear.png]]
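The same truncation effect can be sketched in Python. This is a rough stand-in for the behavior of {{{int(linear())}}}, not //Houdini//&rsquo;s actual channel evaluation:

```python
def linear_interp(value0, value1, frame, frame0, frame1):
    """Straight-line interpolation between two keyframed values,
    analogous to the linear() channel function."""
    t = (frame - frame0) / (frame1 - frame0)
    return value0 + t * (value1 - value0)

# "Number of floors" keyframed from 2 at frame 1 to 10 at frame 300.
raw = [linear_interp(2, 10, f, 1, 300) for f in (1, 150, 300)]

# int() truncates the fractional part, yielding whole floor counts.
floors = [int(v) for v in raw]
```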
As an exercise, before class 4, you should implement the procedural forest project. Minimally, you should include the following:
*Modification of the parameter interface of the geometry container object (including the promoting of parameters from within the network and the creation of any appropriate spare parameters)
*Randomization of the """L-System"""
*Toggling between proxy and final geometry for the trees
*Paintable tree scale
*Appropriate, documenting node names, comments and stickies

''Important:'' Do NOT turn the forest into a digital asset. This is a topic we will cover later in the quarter.

Additional features, functionality and exploration will result in credit toward an overall grade of A for exercises. Suggestions include:
*Addition of multiple vegetation types
*Additional paintable attributes
*Application of color
*Inclusion of rocks or other natural elements
*Expressions to tie the Rows and Columns of the Grid SOP to its Size and a spare parameter on the geometry container object for geometry density
*Leaf geometries for the plants

Once you have a working file, you should start a new file from scratch, attempting to recreate the system without reference to the previous file (and possibly without reference to notes). If you can do so, you are well positioned moving forward.

While you should work within a project directory as you develop your work, you will submit only the final .hip file in the drop box. Use this naming convention for the submitted file:

&nbsp;&nbsp;&nbsp;{{{LastnameFirstname_Ex01_Forest.hip}}}

As an example, {{{HuffKen_Ex01_Forest.hip}}} is correct. {{{khuff_EX1.hip}}}, {{{HUFF_1forest.hip}}}, {{{huffken_Ex01.hip}}}, {{{HuffKen_EX01.hip}}}, etc. are not.
Based on a reference image, recreate the profile of a [[blade for a circular saw.|http://en.wikipedia.org/wiki/Saw]] A [[Google image search|http://images.google.com/images?hl=en&source=imghp&biw=1440&bih=764&q=circular+saw+blade&gbv=2&aq=f&aqi=g10&aql=&oq=]] should turn up many options.

The minimum requirement for this assignment is to recreate the silhouette profile of the blade as a planar surface.

You should have a high-level control that allows for the resizing of the blade, keeping roughly the same size and shape of the teeth while increasing the radius of the blade. It follows that, as the radius increases, so too will the number of teeth on the blade.
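One way to express that relationship, sketched in Python (the {{{pitch}}} value is an illustrative assumption; in //Houdini// the resulting count might drive the number of copies of a tooth profile):

```python
import math

def tooth_count(radius, pitch=0.5):
    """Number of teeth for a blade of the given radius, holding the
    arc length per tooth (the pitch) roughly constant so the teeth
    keep their size and shape as the blade grows."""
    return max(1, round(2 * math.pi * radius / pitch))

# Doubling the radius roughly doubles the tooth count.
small, large = tooth_count(1.0), tooth_count(2.0)
```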

Additional features, functionality and evidence of exploration will result in credit toward an overall grade of A for exercises. For this exercise, the closer you come to recreating your reference blade and the more detailed your result, the better your grade will be. Adding features such as blade thickness, the metal plates attached to the teeth, the aerodynamic cutout shapes, and the expansion slots that appear between some teeth of some blades will count toward improving your grade.

While you should work within a project directory as you develop your work, you will submit only the final .hip file //and your reference image// in the drop box. Use this naming convention for the submitted files:
*{{{LastnameFirstname_Ex02_SawBlade.hip}}}
*{{{LastnameFirstname_Ex02_SawBlade_Reference.jpeg}}}
''These are bits and pieces that may or may not appear in the official notes for the class.'' An internal holding ground for leftovers.

''Assignment:'' [[Rotary saw blade exercise|VSFX 350: Rotary saw blade assignment]] &mdash; You should have this exercise ready by next class (class 6).

''Assignment:'' [[Fizz-Buzz exercise|VSFX 350: Fizz-Buzz assignment]]

''Assignment:'' [[Lissajous curve exercise|VSFX 350: Lissajous assignment]]. You should plan to have this exercise completed by class 8 (next class).

!!!!Notes from today&rsquo;s in-class exercise
//Here are the notes that I projected in the last half of class. These will make sense only if you were in class today. For those who were absent, move directly on to the [[Fizz-Buzz exercise|VSFX 350: Fizz-Buzz assignment]].//

if the box is the first box, make it blue OR
if the box is the last box, make it blue OR
if there are an odd number of boxes AND the box is the middle box, make it blue
otherwise, make the box red

To center (Transform SOP Translate X parameter): {{{($SIZEX / 2.0) - $XMAX}}}

{{{&&  ||  !}}} &mdash;  logical AND, OR and NOT
{{{<  >  ==  <=  >=  !=}}} &mdash;  comparison operators
{{{%}}} &mdash; modulo (remainder) operator
{{{$CY  $NCY}}} &mdash; useful local variables on a Copy SOP

{{{int()}}} &mdash; converts a floating point number into an integer by truncating the fractional value; {{{1.4}}} becomes {{{1}}} and {{{1.9999}}} also becomes {{{1}}}
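The first/last/middle logic from the in-class exercise, written out in Python (in //Houdini// this would be an expression using {{{$CY}}} and {{{$NCY}}} on the Copy SOP; the names here are stand-ins):

```python
def box_color(i, n):
    """Return 'blue' for the first, last, and (odd-count) middle box,
    'red' otherwise -- the logic from the in-class exercise."""
    is_first = (i == 0)
    is_last = (i == n - 1)
    # n % 2 != 0 tests for an odd number of boxes;
    # int(n / 2) truncates, giving the middle index when n is odd.
    is_middle = (n % 2 != 0) and (i == int(n / 2))
    if is_first or is_last or is_middle:
        return "blue"
    return "red"

print([box_color(i, 5) for i in range(5)])
# -> ['blue', 'red', 'blue', 'red', 'blue']
```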

The following have been updated in or added to _MATERIAL:
*{{{CHOPsExamples/}}} &mdash; a number of examples have been added, including:
**{{{CHOPsExample_WarpCHOP.hipnc}}} &mdash; demonstrates the use of a //[[Warp CHOP|http://localhost:48626/nodes/chop/warp]]// to reverse, in time, channel data. It also can be used to speed up and slow down channel data, possibly by non-uniform rates.
*{{{PaintingWithLight/}}} &mdash; Three examples (each with a current movie showing the results of the scene):
** [[Trail SOP|http://localhost:48626/nodes/sop/trail]] to create motion-blurred light-like trails.
** [[Trail SOP|http://localhost:48626/nodes/sop/trail]] to create motion-blurred light trails which also are used as [[geometry lights|http://localhost:48626/shelf/geolight]].
**A motion blur example (without trails).
*{{{WoodenMirror/}}} &mdash; ~CHOPs based system for animating the orientation of a grid of geometries based on an image sequence.

!!!!Update for Section 2 (M/W)
Since we met on Monday, the {{{WoodenMirror/}}} example has been updated for //Houdini 11// materials and rendering. I also have added two additional examples to {{{PaintingWithLight/}}}, one based on motion blur and another which utilizes [[geometry lights|http://localhost:48626/shelf/geolight]]. Movies have been added to {{{PaintingWithLight/}}}, showing the results of each scene.
''VSFX 360: Stereoscopic Imaging''

Jump to notes for class [[1|VSFX 360: Class 1]], [[2|VSFX 360: Class 2]], [[3|VSFX 360: Class 3]], [[4|VSFX 360: Class 4]], [[5|VSFX 360: Class 5]], [[6|VSFX 360: Class 6]], [[7|VSFX 360: Class 7]], [[8|VSFX 360: Class 8]], [[9|VSFX 360: Class 9]], [[10|VSFX 360: Class 10]], [[11|VSFX 360: Class 11]], [[12|VSFX 360: Class 12]], [[13|VSFX 360: Class 13]], [[14|VSFX 360: Class 14]], [[15|VSFX 360: Class 15]], [[16|VSFX 360: Class 16]], [[17|VSFX 360: Class 17]], [[18|VSFX 360: Class 18]], [[19|VSFX 360: Class 19]], [[20|VSFX 360: Class 20]]; [[Open all in new tab|index.html#%5B%5BVSFX%20360%5D%5D%20%5B%5BVSFX%20360%3A%20Class%201%5D%5D%20%5B%5BVSFX%20360%3A%20Class%202%5D%5D%20%5B%5BVSFX%20360%3A%20Class%203%5D%5D%20%5B%5BVSFX%20360%3A%20Class%204%5D%5D%20%5B%5BVSFX%20360%3A%20Class%205%5D%5D%20%5B%5BVSFX%20360%3A%20Class%206%5D%5D%20%5B%5BVSFX%20360%3A%20Class%207%5D%5D%20%5B%5BVSFX%20360%3A%20Class%208%5D%5D%20%5B%5BVSFX%20360%3A%20Class%209%5D%5D%20%5B%5BVSFX%20360%3A%20Class%2010%5D%5D%20%5B%5BVSFX%20360%3A%20Class%2011%5D%5D%20%5B%5BVSFX%20360%3A%20Class%2012%5D%5D%20%5B%5BVSFX%20360%3A%20Class%2013%5D%5D%20%5B%5BVSFX%20360%3A%20Class%2014%5D%5D%20%5B%5BVSFX%20360%3A%20Class%2015%5D%5D%20%5B%5BVSFX%20360%3A%20Class%2016%5D%5D%20%5B%5BVSFX%20360%3A%20Class%2017%5D%5D%20%5B%5BVSFX%20360%3A%20Class%2018%5D%5D%20%5B%5BVSFX%20360%3A%20Class%2019%5D%5D%20%5B%5BVSFX%20360%3A%20Class%2020%5D%5D]]

!!!!Assignments
*[[Head shot]]
*[[Project 1: Photographic explorations|VSFX 360: Photography project]]
*[[Film critiques|VSFX 360: Stereoscopic movie critiques]]
*[[Project 2: Conversion|VSFX 360: Conversion project]]
!!!!Resources
*[[Stereoscopic resources|Stereoscopic: Links]]
*[[Houdini resources|Houdini: Links]]
*[[Maya resources|Maya: Links]]
*[[Look development resources|Look development: Notes]]
*[[Visual resources]]

!!!!Stereoscopic glasses
If you need to purchase stereoscopic glasses (just about any type and configuration), [[Rainbow Symphony|http://www.rainbowsymphony.com/]] is your source. They also will send you a free pair of glasses if you send them a self-addressed, stamped envelope.
Before class 2, you should have a [[head shot|Head shot]] in place in the drop box. Gold stars if you make it stereoscopic. ;-)

The preliminary specifications for [[the first project can be found here|VSFX 360: Photography project]]. Before class 2, you should experiment with shooting some still-image, stereoscopic pairs, preferably with a variety of cameras.

//Foundations of the Stereoscopic Cinema//, by Lenny Lipton, can be downloaded [[here|http://www.andrewwoods3d.com/library/foundation.cfm]] and copies have been placed in _MATERIAL. Before class 3, you should read the preface, along with chapters 1 and 2.
!!!!Anaglyphs
*With the anaglyphic glasses, the red lens goes over the left eye and the blue lens over the right eye.
*To combine a stereoscopic pair into an anaglyph, keep the red channel from the left-eye image, the blue and green channels from the right-eye image.
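Per pixel, that channel combination amounts to the following (a minimal sketch; a real implementation would operate on whole image buffers in //Photoshop// or an image library):

```python
def anaglyph_pixel(left, right):
    """Combine one pixel from each eye into an anaglyph pixel:
    red channel from the left-eye image, green and blue from the right."""
    lr, lg, lb = left
    rr, rg, rb = right
    return (lr, rg, rb)

left_pixel = (200, 50, 25)    # left-eye RGB
right_pixel = (10, 180, 90)   # right-eye RGB
print(anaglyph_pixel(left_pixel, right_pixel))  # -> (200, 180, 90)
```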
''Screening:'' I will be showing //Alice in Wonderland// on Saturday, 7 May, 1:00 p.m., in room 221. Seating will be limited to 20 people. We also will be reviewing selected scenes from the movie in a future class. Attendance is not mandatory, but you could use this as one of your [[film critiques|VSFX 360: Stereoscopic movie critiques]].
''Of interest:'' A $100 DIY beamsplitter rig &mdash; [[Part 1|http://vimeo.com/22862540]] and [[Part 2|http://vimeo.com/23287241]]
In connection with our discussion of floating windows last class, I added {{{FloatingWindow_Example2.psd}}} to _MATERIAL.

The ''third project'' (creation of a floating window rig in //Maya//, //Houdini//, and/or //Nuke//) should be completed by next class (Tuesday, 17 May).

If you have not sent me a proposal for the ''final project'', you should do so before next class (Tuesday, 17 May).
For the ''third project,'' please submit the following:
*{{{LastnameFirstname_P3/}}} &mdash; a directory containing your submission.
*{{{LastnameFirstname_P3.ext}}} &mdash; the //Maya//, //Houdini//, or //Nuke// scene file containing your rig.
*{{{LastnameFirstname_P3.mov}}} &mdash; a ~QuickTime movie, high-quality H.264 compression, 1280x720 pixels, minimum. The movie should be stereoscopic (anaglyph) and should show the use of your rig in a scene. UPDATE: This should be a rendered animation showing the effect of the floating window/floating crop. It does not need to be a screen capture demonstration of the use of the rig within the application.
//New York Times//: [[3-D Starts to Fizzle, and Hollywood Frets|http://www.nytimes.com/2011/05/30/business/media/30panda.html]] &mdash; Maybe people just don&rsquo;t want to pay to see bad movies?!?
The [[specification for the first project|VSFX 360: Photography project]] has been updated with additional details to follow.

Of interest:
*[[Lenny Lipton|http://lennylipton.com/]]&rsquo;s [[blog|http://lennylipton.wordpress.com/]].
*Charles Wheatstone&rsquo;s [[1838|http://www.stereoscopy.com/library/wheatstone-paper1838.html]] and [[1852|http://www.stereoscopy.com/library/wheatstone-paper1852.html]] papers. Both papers also are included in a single PDF in _MATERIAL.
!!!!Project 4 submissions
The fourth project should be submitted in a directory, {{{LastnameFirstname_P4/}}}. As each of you has a unique project, you should determine your own internal naming convention inside that directory and should submit whatever materials you feel best document the work that you did for the project.

For most projects, at minimum, you should submit three video files: one H.264 anaglyph video and separate full-color H.264 videos for the left and right eyes.

Each individual project may have additional requirements that we will discuss in person.
The painting-with-light project that I mentioned last class: //[[12:31|http://www.project1231.com/]]// by Croix Gagnon and Frank Schott. Makes me wonder how I might implement something like this in //Houdini//? Volumetric data stretched along a path&hellip;hmmmm&hellip;

Some readings:
*[[Why 3D movies could be so much more|http://blogs.forbes.com/markchangizi/2011/03/25/why-3d-movies-could-be-so-much-more/]]
*[[Seeing Through Yourself: The Fundamental Reason For Binocular Vision|http://changizi.wordpress.com/2011/03/25/seeing-through-yourself-the-fundamental-reason-for-binocular-vision/]]
*[[3D Movies Are Missing the Point...Of View|http://www.psychologytoday.com/blog/nature-brain-and-culture/201012/3d-movies-are-missing-the-pointof-view]]
*[[Why are 3D movies like Avatar such fun?|http://blogs.telegraph.co.uk/technology/markchangizi/100004473/why-are-3d-movies-like-avatar-such-fun/]]
The [[specification for the first project|VSFX 360: Photography project]] has been finalized.

''Assignment:'' [[Film critiques|VSFX 360: Stereoscopic movie critiques]]

''Screening:'' I will be showing //Coraline// on Saturday, 9 April, 2:00 p.m., in room 221. Seating will be limited to 20 people. We also will be reviewing selected scenes from the movie in a future class. Attendance is not mandatory, but you could use this as one of your [[film critiques|VSFX 360: Stereoscopic movie critiques]].
!!!!Distractions
*[[Over 8,000 digitized stereocards at the U.S. Library of Congress.|http://www.loc.gov/pictures/collection/stereo/about.html]] Oh my.
*Chris B. pointed out that one of my favorite web comics, //[[xkcd|http://xkcd.com/]]//, has &ldquo;[[gone 3D|http://xk3d.xkcd.com/]]&rdquo;.
''Deadline change:'' [[Project 1|VSFX 360: Photography project]] is now due class 6, instead of class 5.

If you need to purchase stereoscopic glasses (just about any type and configuration), [[Rainbow Symphony|http://www.rainbowsymphony.com/]] is your source. They also will send you a free pair of glasses if you send them a self-addressed, stamped envelope.
[[Project 1|VSFX 360: Photography project]] is now due by the end of today&rsquo;s class. If you make any modifications to your submission after 1:30 p.m. today, you should email me to be sure that I evaluate the latest version of your work.
We are working on setting up stereoscopic painting-with-light sessions this weekend&hellip;details to follow.
The specification for the [[second project|VSFX 360: Conversion project]] has been added. You should work to have final versions of the images ready for next class.

!!!!Final project proposals
Before class 11, you should email a proposal for your final project to --khuff@scad.edu--.

Please use this subject line: {{{VSFX 360 Final Project Proposal}}}

Be as specific as necessary to convey the intent and techniques of your proposed project.
With this project, you will explore and document concepts related to converting monoscopic images to stereoscopic.

You will be creating ''two'' sets of images, each including:
*A photographed left and right pair
*A reconstructed right image &mdash; starting from a copy of the left eye image, create an appropriate right-eye image
*Anaglyphic versions of both pairs of images
You are expected to minimize retinal rivalry, striving to eliminate it altogether.
!!!!Submission requirements
You will be providing the following for each of the images you create:
*An anaglyphic version saved as a PNG file. These can either be false-color or &ldquo;monochromatic&rdquo;.
*Two independent images, one each for the left and right eyes, saved as TIFF, PNG or TARGA files.
The dimensions of the images should be 1,920 by 1,080 pixels. You may use aspect ratios other than 16:9, in which case you should matte the image using black.

You will be creating a simple HTML document which will show the anaglyphic images and indicate which concept the images are meant to illustrate.
*The concepts should be presented in the order listed above (as headings), with the appropriate images shown below each heading.
*You may use [[this example|inclusions-2010-fall/TECH311-SampleReferencePage.html]] as a preliminary general guide of minimum formatting.
*For the {{{<img>}}} tags in the HTML code, set the {{{width}}} and {{{height}}} attributes to 960 and 540, respectively (i.e., a 50% reduction).
!!!!Naming conventions and directory structure
Create the following two directories in your drop box:
*{{{LastnameFirstname_P2/}}} &mdash; Current versions of images go here.
*{{{LastnameFirstname_P2/previous_versions/}}} &mdash; Previous versions of images go here. For example, if you make changes based on feedback, place the previous versions of the image in this directory, keeping the current version of the image at the top level of the project directory.
Images should be saved with the following naming convention: {{{LastnameFirstname_P2_i###_v##_[original/converted]_[L/R/A].ext}}}, where
*{{{i###}}} &mdash; Zero-padded sequential number for each unique image you are submitting
*{{{v##}}} &mdash; Zero-padded sequential number for each version of the image
*{{{[original/converted]}}} &mdash; Indicating the original versions and the converted versions.
*{{{[L/R/A]}}} &mdash; Left eye/right eye/anaglyph
*{{{.ext}}} &mdash; An appropriate file extension
Example: {{{HuffKen_P2/HuffKen_P2_i002_v01_original_L.tga}}} would be the original left eye for the first version of the second image submitted.
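A small helper that builds names following this convention (a hypothetical function, shown only to make the token order concrete):

```python
def p2_filename(last, first, image_num, version, stage, eye, ext):
    """Build a Project 2 file name following
    LastnameFirstname_P2_i###_v##_[original/converted]_[L/R/A].ext"""
    assert stage in ("original", "converted")
    assert eye in ("L", "R", "A")
    # i### and v## are zero-padded sequential numbers.
    return "{}{}_P2_i{:03d}_v{:02d}_{}_{}.{}".format(
        last, first, image_num, version, stage, eye, ext)

print(p2_filename("Huff", "Ken", 2, 1, "original", "L", "tga"))
# -> HuffKen_P2_i002_v01_original_L.tga
```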

Even though you should not modify the left-eye image, please include a copy of the original left-eye image as the converted left-eye image.

Unless specifically requested, you only should submit the rectified versions of your images, not the original photographs.

The HTML review document should be saved as {{{LastnameFirstname_P2/LastnameFirstname_Review.html}}} and always should reference the latest anaglyph versions of your images. Double check your HTML code to confirm that you have relative file references in your {{{<img>}}} tags.
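One way to generate such a review page (a sketch; the heading and file names are placeholders, and the {{{<img>}}} tags use the required 960x540 attributes with relative file references):

```python
def review_page(entries):
    """entries: list of (concept_heading, [relative anaglyph paths]).
    Returns minimal HTML showing each anaglyph at 50% size (960x540)."""
    lines = ["<html><body>"]
    for heading, images in entries:
        lines.append("<h4>{}</h4>".format(heading))
        for path in images:
            # Relative paths only -- absolute paths break when the
            # directory is copied into the drop box.
            lines.append('<img src="{}" width="960" height="540" />'.format(path))
    lines.append("</body></html>")
    return "\n".join(lines)

html = review_page([("Linear perspective",
                     ["HuffKen_P2_i001_v01_converted_A.png"])])
```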

''Deadline:'' This project is scheduled to be due class 10.
With this project, you will explore and document concepts related to stereoscopic imaging through photography.

To start, before class 2, you should experiment with shooting some basic, still-image, stereoscopic pairs, with a variety of cameras, if possible.

You will be creating and providing a //minimum// of three stereoscopic image pairs illustrating each of the following concepts:
*Each of the psychological depth cues:
**Linear perspective
**Aerial perspective (atmospheric perspective)
**Interposition (occlusion)
**Texture gradient
**Retinal angle
**--Motion parallax-- (this will be deferred to later projects, but if you can or want to work with some moving footage&hellip;)
*Depth of field (if you are working with a point-and-shoot camera, you may need to use the camera&rsquo;s macro mode in order to be able to acquire any significant depth-of-field blurring)
*Positive/zero/negative parallax (screen space/screen/viewer space)
*Hyperstereo/orthostereo/hypostereo (these can/should? be of the same subject for easy comparison)
*Reflections/specular highlights
*Transparency
*Time-of-day/varying lighting conditions
*Other &mdash; Painting with light, long exposures, time-lapse, {{{____}}} (fill in the blank) &mdash; 5 additional images, minimum, that show artistic and/or technical exploration
For a given concept the image pairs should be different, but you may reuse a given image pair //twice// for different concepts.

You are expected to minimize retinal rivalry, striving to eliminate it altogether. Be aware of edge violations, especially those that cause uncomfortable pinning of the image.
!!!!Submission requirements
You will be providing the following for each of the images you create:
*An anaglyphic version saved as a PNG file. These can either be false-color or &ldquo;monochromatic&rdquo;. (We need a format that will not clobber the color of the anaglyphs but which also is easily viewable in an HTML file.)
*Two independent images, one each for the left and right eyes, saved as TIFF, PNG or TARGA files.
The dimensions of the images should be 1,920 by 1,080 pixels. You may use aspect ratios other than 16:9, in which case you should matte the image using black.

You will be creating a simple HTML document which will show the anaglyphic images and indicate which concept the images are meant to illustrate.
*The concepts should be presented in the order listed above (as headings), with the appropriate images shown below each heading.
*You may use [[this example|inclusions-2010-fall/TECH311-SampleReferencePage.html]] as a preliminary general guide of minimum formatting.
*For the {{{<img>}}} tags in the HTML code, set the {{{width}}} and {{{height}}} attributes to 960 and 540, respectively (i.e., a 50% reduction).
!!!!Naming conventions and directory structure
Create the following two directories in your drop box:
*{{{LastnameFirstname_P1/}}} &mdash; Current versions of images go here.
*{{{LastnameFirstname_P1/previous_versions/}}} &mdash; Previous versions of images go here. For example, if you make changes based on feedback, place the previous versions of the image in this directory, keeping the current version of the image at the top level of the project directory.
Images should be saved with the following naming convention: {{{LastnameFirstname_P1_i###_v##_[L/R/A].ext}}}, where
*{{{i###}}} &mdash; Zero-padded sequential number for each unique image you are submitting
*{{{v##}}} &mdash; Zero-padded sequential number for each version of the image
*{{{[L/R/A]}}} &mdash; Left eye/right eye/anaglyph
*{{{.ext}}} &mdash; An appropriate file extension
Example: {{{HuffKen_P1/HuffKen_P1_i014_v02_L.tga}}} would be the left eye for the second version of the fourteenth image submitted.

Unless specifically requested, you only should submit the rectified versions of your images, not the original photographs.

The HTML review document should be saved as {{{LastnameFirstname_P1/LastnameFirstname_Review.html}}} and always should reference the latest anaglyph versions of your images. Double check your HTML code to confirm that you have relative file references in your {{{<img>}}} tags.

''Deadline:'' This project is scheduled to be due class 5.
Over the course of the quarter, you should attend screenings of at least two stereoscopic movies. After viewing the movie, write a brief critique, focusing on the implementation of stereoscopic imaging in the film. Describe scenes or situations that you found particularly memorable or effective. Also describe scenes or elements which you found problematic in the film.

Send the text of the critique as the body of a email (not an attachment) to --khuff@scad.edu-- before the end of the quarter.

Please use the following as the subject of the email: {{{Film critique: Name of film}}}.

Here are some stereoscopic films due to be released during the quarter:

|Rio|15 April|
|Cave of forgotten dreams|29 April|
|Thor|6 May|
|Priest|13 May|
| Pirates of the Caribbean: On Stranger Tides|20 May|
|Kung Fu Panda 2|26 May|

Additionally, //Coraline//, //Tangled// and //Alice in Wonderland// will be screened in room 221, outside of class, at some points during the quarter.
''VSFX 424: Digital Visual Effects II''

Jump to notes for class [[1|VSFX 424: Class 1]], [[2|VSFX 424: Class 2]], [[3|VSFX 424: Class 3]], [[4|VSFX 424: Class 4]], [[5|VSFX 424: Class 5]], [[6|VSFX 424: Class 6]], [[7|VSFX 424: Class 7]], [[8|VSFX 424: Class 8]], [[9|VSFX 424: Class 9]], [[10|VSFX 424: Class 10]], [[11|VSFX 424: Class 11]], [[12|VSFX 424: Class 12]], [[13|VSFX 424: Class 13]], [[14|VSFX 424: Class 14]], [[15|VSFX 424: Class 15]], [[16|VSFX 424: Class 16]], [[17|VSFX 424: Class 17]], [[18|VSFX 424: Class 18]], [[19|VSFX 424: Class 19]], [[20|VSFX 424: Class 20]]; [[Open all in new tab|index.html#%5B%5BVSFX%20424%5D%5D%20%5B%5BVSFX%20424%3A%20Class%201%5D%5D%20%5B%5BVSFX%20424%3A%20Class%202%5D%5D%20%5B%5BVSFX%20424%3A%20Class%203%5D%5D%20%5B%5BVSFX%20424%3A%20Class%204%5D%5D%20%5B%5BVSFX%20424%3A%20Class%205%5D%5D%20%5B%5BVSFX%20424%3A%20Class%206%5D%5D%20%5B%5BVSFX%20424%3A%20Class%207%5D%5D%20%5B%5BVSFX%20424%3A%20Class%208%5D%5D%20%5B%5BVSFX%20424%3A%20Class%209%5D%5D%20%5B%5BVSFX%20424%3A%20Class%2010%5D%5D%20%5B%5BVSFX%20424%3A%20Class%2011%5D%5D%20%5B%5BVSFX%20424%3A%20Class%2012%5D%5D%20%5B%5BVSFX%20424%3A%20Class%2013%5D%5D%20%5B%5BVSFX%20424%3A%20Class%2014%5D%5D%20%5B%5BVSFX%20424%3A%20Class%2015%5D%5D%20%5B%5BVSFX%20424%3A%20Class%2016%5D%5D%20%5B%5BVSFX%20424%3A%20Class%2017%5D%5D%20%5B%5BVSFX%20424%3A%20Class%2018%5D%5D%20%5B%5BVSFX%20424%3A%20Class%2019%5D%5D%20%5B%5BVSFX%20424%3A%20Class%2020%5D%5D]]

!!!!Assignments
* --[[Head shot]]-- (No headshot is required for this class.)
*[[Projects|VSFX 424: Projects assignment]]
*[[Fireflies exercise|VSFX 424: Fireflies assignment]]
*[[Seal bubble trail exercise|VSFX 424: Seal bubble trail assignment]]
*[[nParticle exploration exercise|VSFX 424: nParticle exploration assignment]]
*[[Giant bubble exercise|VSFX 424: Giant bubble assignment]]

[[Maya resources|Maya: Links]]
--Before class 2, you should have a [[head shot|Head shot]] in place in the drop box.-- (Never mind.)

Start reading the //Nucleus in Autodesk Maya Whitepaper,// available [[here|http://usa.autodesk.com/adsk/servlet/pc/index?siteID=123112&id=13583699]] or in dropbox/_MATERIAL.
[[Project 1|VSFX 424: Projects assignment]] is due at the end of today&rsquo;s class.

We will be reviewing the preview versions of the [[nParticle exploration exercise|VSFX 424: nParticle exploration assignment]] today.

Referring back to an earlier discussion: rendering per-vertex color data is possible with the //mentalrayVertexColors// node. The workflow is described in the //Maya 2011// documentation at //User Guide -> Rendering and Render Setup -> Shading -> mental ray for Maya Shading -> Basics of mental ray for Maya shading -> Render color per vertex in mental ray for Maya//. (I told you it was a long path.)
''Mid-term conferences:'' You should have received an email over the weekend regarding mid-term conferences.
''Assignment:'' [[Giant bubble exercise|VSFX 424: Giant bubble assignment]]. We will look at review videos in class 13 and final versions in class 14.

Duncan Brinsmead&rsquo;s [[blog|http://area.autodesk.com/blogs/duncan]] was mentioned in connection with Nucleus-related topics. The blog also is mentioned in [[Maya resources|Maya: Links]].

The following files have been added to _MATERIAL:
*{{{nParticles_ScriptedCollisionEvents}}} project &mdash; the particle collision example from last class, where colliding particles birthed new particles with colors blended from the colliding particles.
*{{{rndPoints.mel}}} &mdash; a MEL script by Dirk Bialluch for randomizing components.
I will be giving [[a presentation to the Digital Media Club this Wednesday.|Codename: Stonehenge]]
[[Two extra help sessions have been scheduled.|Extra help sessions]]

On the ~CGSociety forums, there is [[a very, very long thread|http://forums.cgsociety.org/showthread.php?f=86&t=155001]] dealing with //Maya// Fluid Effects. Lots o&rsquo; great information here (there are frequent posts by Duncan Brinsmead), but it is a long read. Be sure to read it in order. It covers Fluids back many versions, and some of the information changes over time.
The [[specifications for project submissions|VSFX 424: Projects assignment]] have been posted. For those of you who already have submitted project work, please check the specifications and revise your submissions accordingly.

Of interest: [[Peter Shipkov|http://petershipkov.com/]] has created a number of very interesting and powerful toolsets and workflows for //Maya//. Two favorites of mine: //[[SOuP|http://petershipkov.com/development/SOuP/SOuP.htm]]// (adds a large number of procedural tools to //Maya//) and //[[Overburn|http://petershipkov.com/development/overburn/overburn.htm]]// (uses particles and fluids to create very detailed and potentially realistic volumetric effects)

I have added [[Maya: Toggling the update of render thumbnails]] (a note and MEL script). I also placed a copy of the script in _MATERIAL. (Thank you, Jared.)

{{{FluidEffects_ExpressionOnInternalDirectionalLight.ma}}} has been added to _MATERIAL. It contains the MEL/Python expression I demonstrated last class. The expression ties the rotation of a directional light into the //fluid.directionalLight// vector orientation attribute.

The easiest way to view the expression is to bring up the Expression Editor (Windows -> Animation Editors -> Expression Editor), change the //Select Filter// menu to //By Expression Name//. You then should see the //directionalLightFluidExpression// in the Objects/Selection list.

''Note'' that this scene assumes that you have imported ~PyMEL with the following:
{{{
from pymel.core import *
}}}
Enter it either in a Python tab of the Script Editor or in the Command Line in Python mode, or add the following to your {{{maya/}}}//{{{version}}}//{{{/scripts/userSetup.mel}}} file:
{{{
python("from pymel.core import *");
}}}

~PyMEL is standard with //Maya 2011// or is available for download [[here|http://code.google.com/p/pymel/]] for earlier versions of //Maya//.
[[Maya resources|Maya: Links]] has been updated.

!!!!Readings
Before class 3 you should read the 1983 SIGGRAPH paper by William Reeves and the 1990 paper by Karl Sims in _MATERIAL and finish reading the //Nucleus in Autodesk Maya Whitepaper.// The behind-the-scenes video for the digital effects in //Star Trek II: The Wrath of Khan// can be found [[here|http://www.youtube.com/watch?v=Qe9qSLYK5q4]]. Karl Sims&rsquo;s //Particle Dreams// can be found [[here|http://www.karlsims.com/particle-dreams.html]].

[[Maya: Particle expressions]] &mdash; Some notes on execution of particle expressions in Maya

You also should review the following sections in the Maya documentation:
*User&rsquo;s Guide > Dynamics and Effects > Dynamics > Particles
*User&rsquo;s Guide > Dynamics and Effects > Dynamics > Fields
*User&rsquo;s Guide > Dynamics and Effects > Dynamics > Dynamics Nodes > Particle nodes
* Node documentation for particleShape and particleSamplerInfo

Assignment: [[Fireflies exercise|VSFX 424: Fireflies assignment]]
//[[How to fold a bunny|http://www.youtube.com/watch?v=GAnW-KU2yn4]]// or //Bunny attacked by alligator (clips)//.

Project 2 is due at the end of today&rsquo;s class.
The submission criteria for the [[Fireflies exercise|VSFX 424: Fireflies assignment]] have been updated.

Before next class, you should send me an e-mail describing your current plan for your two projects.

As discussed in class, the following scripts have been added to _MATERIAL:
*{{{fit.mel}}}
*{{{removeInitialState.mel}}}

You should review the following sections in the Maya documentation:
*User&rsquo;s Guide > General > MEL and Expressions > Particle Expressions
The final version of your [[fireflies exercise|VSFX 424: Fireflies assignment]] should be in the drop box today.

In the Maya documentation, per-particle field attributes are mentioned at:
*User&rsquo;s Guide > Dynamics and Effects > Dynamics > Fields > Work with fields >  Work with per-particle field attributes

Assignment: [[Seal bubble trail exercise|VSFX 424: Seal bubble trail assignment]]

!!!!Removing initial states from particle systems
As a follow-up to the {{{removeInitialState.mel}}} script, Joey mentioned that he has a two-line version. This turned out to be a snippet of MEL code that clears the initial state from //position0// and //lifespanPP0//. It assumes the particle system is the current selection and clears only those two per-particle initial-state attributes:
{{{
setAttr ".position0" -type "vectorArray" 0;
setAttr ".lifespanPP0" -type "doubleArray" 0;
}}}
The {{{removeInitialState.mel}}} script is more general: it works on multiple selected particle systems at the same time and clears all initial-state attributes, including those added by the user.
You should have the preview movie available for review for the [[seal bubble trail exercise|VSFX 424: Seal bubble trail assignment]] by today&rsquo;s class.

{{{fitclamp.mel}}} has been added to _MATERIAL. This variation on the fit function clamps the result value to the output range, matching the behavior of the ~HScript //fit()// function in //Houdini//.
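For reference, the clamping behavior being matched can be sketched in Python (Houdini&rsquo;s //fit()// clamps the input to the source range before remapping):

```python
def fit(value, omin, omax, nmin, nmax):
    """Remap value from [omin, omax] to [nmin, nmax],
    clamping to the source range first (Houdini fit() behavior)."""
    clamped = max(omin, min(value, omax))
    t = (clamped - omin) / float(omax - omin)
    return nmin + t * (nmax - nmin)

print(fit(0.5, 0.0, 1.0, 0.0, 10.0))  # -> 5.0
print(fit(2.0, 0.0, 1.0, 0.0, 10.0))  # input clamped, -> 10.0
```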
You should have the final movie submitted for the [[seal bubble trail exercise|VSFX 424: Seal bubble trail assignment]] by today&rsquo;s class.

I have added a new resource page: [[Visual resources]] &mdash; A collection of links to sites with deep and/or broad visual references.

Major updates to [[Maya resources|Maya: Links]] and [[Look development resources|Look development: Links]].

----
''Makeup class:'' [[Class 8|VSFX 424: Class 8]] will be a makeup class on Friday, 8 October, 10:30 a.m., Room 206 (our normal room). This will be a project review and troubleshooting session &mdash; an opportunity to receive feedback and assistance with your first project.
----
''Assignment:'' [[nParticle exploration exercise|VSFX 424: nParticle exploration assignment]] &mdash; you should be prepared to present your preview movies next class, class 8, 8 October.

!!!!Readings
You should review the nParticles documentation, including the nParticleShape and nucleus node documentations.

----
''Makeup class:'' [[Class 8|VSFX 424: Class 8]] will be a makeup class on Friday, 8 October, 10:30 a.m., Room 206 (our normal room). This will be a project review and troubleshooting session &mdash; an opportunity to receive feedback and assistance with your first project.
----
--We will be reviewing the preview movies for the [[nParticle exploration exercise|VSFX 424: nParticle exploration assignment]] today.--

The following items have been added to _MATERIAL:
*{{{VariantMELScriptsKAH/}}} &mdash; The transform randomization and random shader assignment scripts I use in my work and which were demonstrated earlier this week.
*{{{djRivet.zip}}} &mdash; The //djRivet// script by [[David Johnson|http://www.djx.com.au/blog/]], originally downloaded from [[here|http://www.djx.com.au/blog/downloads/]], and used in the location-based shading network demonstration from earlier in the week.
*{{{dynamicFollow.mel}}} and {{{cameraFollow.mel}}} &mdash; //dynamicFollow.mel// creates an expression (which includes damping and drag) that causes one object to follow another (select the leader object first, then the follower); //cameraFollow.mel// uses //dynamicFollow.mel// to create that relationship between the current selection and the first renderable perspective camera. It is probably best to use //dynamicFollow.mel// directly, especially if you have multiple cameras in your scene.
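The follow behavior can be sketched as a simple damped-spring update (a one-dimensional sketch of the idea, not the actual MEL; the {{{spring}}} and {{{drag}}} values are arbitrary):

```python
def follow_step(follower, velocity, leader, spring=0.2, drag=0.15):
    """One timestep of a damped follow: accelerate toward the leader,
    then bleed off velocity with drag so the follower settles."""
    velocity += (leader - follower) * spring
    velocity *= (1.0 - drag)
    return follower + velocity, velocity

pos, vel = 0.0, 0.0
for _ in range(200):
    pos, vel = follow_step(pos, vel, leader=10.0)
# After many steps the follower has settled at the leader's position.
```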

----
''Makeup class:'' This will be a makeup class on Friday, 8 October, 10:30 a.m., Room 206 (our normal room). This primarily will be a project review and troubleshooting session &mdash; an opportunity to receive feedback and assistance with your first project.
----
If you have the problem of your editor windows disappearing from //Maya 2011// when you switch between single- and dual-monitor workstations, I have posted [[a fix here.|TECH 311: Class 7]]

We will be reviewing the preview versions of the [[nParticle exploration exercise|VSFX 424: nParticle exploration assignment]] today.
Using the firefly example we worked on in class, improve the motion and the look of the simulation. Your goal is to produce a realistic/naturalistic animation of 5&ndash;10 seconds. You should experiment with alternate particle render types.

Here are a couple of videos to serve as a jumping-off point for reference: [[one|http://www.youtube.com/watch?v=9OJpcBGPSEs]] and [[two|http://www.youtube.com/watch?v=YNPg9f4El6M]]. And [[here is a lovely set of long-exposure photographs.|http://quit007.deviantart.com/gallery/#Fireflies]] Do not limit yourself to these references, and if you find more or better references, please let me know. You will be documenting your references in the submission, so keep track of """URLs""".

You will be preparing a review video for Class 3 and should work to have the final version completed by Class 4.

!!!!Submissions
Create a directory, {{{LastnameFirstname_Fireflies}}}, in your drop box. Include the following:
*{{{reference/}}}, a directory containing up to 20 reference images
*{{{reference/links.txt}}}, a text file containing links to any moving image references you might have found for the exercise
*{{{LastnameFirstname_Fireflies_Preview.mov}}}, a """QuickTime""" movie with your preview animation, H.264 compression, 1280x720, 24 or 30 frames per second.
*{{{LastnameFirstname_Fireflies.mov}}}, a """QuickTime""" movie with your final animation, H.264 compression, 1280x720 or 1920x1080, 24 or 30 frames per second.
*{{{LastnameFirstname_Fireflies/}}}, a Maya project directory containing your final scene. You should remove any empty subdirectories from the project, any rendered images and any temporary files. Do not include cache files beyond initial state caches. Be sure to remove any preliminary files and their related caches. You should check to see that there are no significant error messages when opening the Maya scene file after you have created the submission version of the project.
*{{{readme.txt}}}, a text-only file which contains any information you think I should know while evaluating your submission.

Movies should be well compressed, meaning that they should be as small as possible without degrading image quality in any visually significant manner.
Based on the references below, create a simulation of a giant soap bubble. Your goal is to produce a realistic/naturalistic animation of 15&ndash;20 seconds. 

In addition to the bubble form, your work should include at least one of the following:
*Look development of the soap film
*Creating the string-on-sticks bubble wands and having your bubble growing from the string
*Animating/simulating the bubble popping

!!!!References
[[One|http://www.youtube.com/watch?v=3i-zYdOPG2k]] (also in _MATERIAL), [[two|http://www.youtube.com/watch?v=d9aW55jRJYY]], [[three|http://www.youtube.com/watch?v=oS8P0YNHMTs]], [[four|http://vimeo.com/12838882]], [[five|http://vimeo.com/12355264]], [[six|http://vimeo.com/5877626]] and [[seven|http://vimeo.com/5887151]]. [[And some photographs as well.|http://www.flickr.com/photos/slaioo/tags/bubble/]] Okay, I&rsquo;ll stop now.

The //Wikipedia// page for [[soap bubbles|http://en.wikipedia.org/wiki/Soap_bubble]] is a good jumping-off point and [[this site|http://www.soapbubble.dk/en/bubbles/]] has some overview information as well.

!!!!Submissions
Create a directory, {{{LastnameFirstname_GiantBubble}}}, in your drop box. Include the following:
*{{{LastnameFirstname_GiantBubble_Preview.mov}}}, a """QuickTime""" movie with your preview animation, H.264 compression, 1280x720, 24 or 30 frames per second.
*{{{LastnameFirstname_GiantBubble.mov}}}, a """QuickTime""" movie with your final animation, H.264 compression, 1280x720 or 1920x1080, 24 or 30 frames per second.
*{{{LastnameFirstname_GiantBubble/}}}, a Maya project directory containing your final scene. You should remove any empty subdirectories from the project, any rendered images and any temporary files. Do not include cache files beyond initial state caches. Be sure to remove any preliminary files and their related caches. You should check to see that there are no significant error messages when opening the Maya scene file after you have created the submission version of the project.
*{{{readme.txt}}}, a text-only file which contains any information you think I should know while evaluating your submission.

Movies should be well compressed &mdash; as small as possible without degrading image quality in any visually significant manner.
Before the end of the quarter, you will be completing two major projects involving the subjects taught in the course. The subject matter and the exact combination of techniques are at your discretion, with possible input from and modifications by the professor.

!!!!Project submissions
//Below, substitute appropriate numbers for// {{{#}}}.

Create a directory, {{{LastnameFirstname_Project#}}}, in your drop box. Include the following:
*{{{LastnameFirstname_Project#.mov}}} &mdash; A """QuickTime""" movie with your final animation, H.264 compression, 1280x720 or 1920x1080, 24 or 30 frames per second.
**The movie should contain at least 5&ndash;10 seconds of rendered animation.
**The movie should include a technical breakdown, the exact nature of which is at your discretion.
**Movies should be well compressed, meaning that they should be as small as possible without degrading image quality in any visually significant manner.
*{{{LastnameFirstname_Project#_Still#.png}}} &mdash; At least one still image per project, at least 1800 pixels on its largest dimension, saved as a PNG file, without alpha channels. These should be &ldquo;hero shots&rdquo; of your project, well composed to show the complexity, detail and development.
*{{{maya/}}} &mdash; A Maya project directory containing your final scene. You should remove any empty subdirectories from the project, any rendered images and any temporary files. Do not include cache files beyond initial state caches. Be sure to remove any preliminary files and their related caches. You should check to see that there are no significant error messages when opening the Maya scene file after you have created the submission version of the project. Include only your final version of the scene, preferably in Maya ASCII format.
*{{{readme.txt}}} &mdash; An optional text-only file which contains any information you think I should know while evaluating your submission.
**If your project was created in collaboration with another student (in the class or otherwise) or if you are submitting some or all of the project to another class, you should make note of these facts in this file.
*{{{concept/}}} &mdash; A directory containing any concept artwork you may have produced for the project. Flatten any layered Photoshop files.
*{{{reference/}}} &mdash; A directory containing reference images and movies. Images should be saved as ~JPEGs. Reference movies should be well compressed. There is no set naming convention for the files in this directory, but the files should be well organized.
Based on the reference movie, //~HuffKA-SealBubbleTrail-AquariumOfThePacific-2010-07-29-MVI_1273.mov// in _MATERIAL, create a simulation of the trail of bubbles that follows a diving seal. Your goal is to produce a realistic/naturalistic animation of 15&ndash;20 seconds. Your primary tool for this process likely will be one or more volume axis curve fields along with an animated curve (either hand-animated or simulated).

You will be preparing a review video for Class 5 and should work to have the final version completed by Class 6.

!!!!Submissions
Create a directory, {{{LastnameFirstname_SealTrail}}}, in your drop box. Include the following:
*{{{LastnameFirstname_SealTrail_Preview.mov}}}, a """QuickTime""" movie with your preview animation, H.264 compression, 1280x720, 24 or 30 frames per second.
*{{{LastnameFirstname_SealTrail.mov}}}, a """QuickTime""" movie with your final animation, H.264 compression, 1280x720 or 1920x1080, 24 or 30 frames per second.
*{{{LastnameFirstname_SealTrail/}}}, a Maya project directory containing your final scene. You should remove any empty subdirectories from the project, any rendered images and any temporary files. Do not include cache files beyond initial state caches. Be sure to remove any preliminary files and their related caches. You should check to see that there are no significant error messages when opening the Maya scene file after you have created the submission version of the project.
*{{{readme.txt}}}, a text-only file which contains any information you think I should know while evaluating your submission.

Movies should be well compressed, meaning that they should be as small as possible without degrading image quality in any visually significant manner.
For this assignment, you will be exploring nParticles in //Maya//. There is no restriction on subject matter or technique, beyond the required use of nParticles.
Your goal is to produce two interesting, animated effects, each 5&ndash;20 seconds in length.

After we review your two previews, one will be selected to be taken to final form.

You will be preparing a review video for Class 8 and should work to have the final version completed by Class 9.

!!!!Submissions
Create a directory, {{{LastnameFirstname_nParticles}}}, in your drop box. Include the following:
*{{{LastnameFirstname_nParticles_Preview.mov}}}, a """QuickTime""" movie with your preview animation, H.264 compression, 1280x720, 24 or 30 frames per second.
*{{{LastnameFirstname_nParticles.mov}}}, a """QuickTime""" movie with your final animation, H.264 compression, 1280x720 or 1920x1080, 24 or 30 frames per second.
*{{{LastnameFirstname_nParticles/}}}, a Maya project directory containing your final scene. You should remove any empty subdirectories from the project, any rendered images and any temporary files. Do not include cache files beyond initial state caches. Be sure to remove any preliminary files and their related caches. You should check to see that there are no significant error messages when opening the Maya scene file after you have created the submission version of the project.
*{{{readme.txt}}}, an optional text-only file which contains any information you think I should know while evaluating your submission.

Movies should be well compressed, meaning that they should be as small as possible without degrading image quality in any visually significant manner.
[img[VSFX Rules!|inclusions-2011-spring/VSFX_LightPainting.jpg]]

Thank you Craig, Steve, Nate and Megan (from left to right, skipping the weirdo that made the smiley face). This was well worth a few bug bites. Additional stereoscopic images to follow&hellip;
!!!!VLC
VLC can do screen captures. Who knew? There is a [[blog post here that describes the process.|http://www.paulhagon.com/apple/2009/07/27/using-vlc-for-screen-capture/]] The trick is using {{{screen://}}} as the stream specification.

The steps are shown for the Mac OS X version of VLC, but I have confirmed that it also works under Linux and would guess that it works under Windows as well. Now to figure out the best compression settings...
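For reference, the same capture can be started entirely from the command line. This is a sketch, not a tested recipe &mdash; the frame rate and bitrate values are illustrative assumptions, and you should check the option names against {{{vlc --help}}} for your version of VLC:
{{{
# Capture the screen at 15 fps, encode to H.264 and write capture.mp4.
# The --screen-fps and vb (bitrate) values here are assumptions; adjust to taste.
vlc screen:// --screen-fps=15 \
    --sout "#transcode{vcodec=h264,vb=1024}:std{access=file,mux=mp4,dst=capture.mp4}"
}}}
The key piece, as noted above, is {{{screen://}}} as the input; everything after {{{--sout}}} just describes the transcode-and-save chain.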

!!!!screentoaster.com
As an alternative, [[www.screentoaster.com|http://www.screentoaster.com/]] is a Flash-based, on-line screen capture utility. Freaks me out a bit that an on-line utility can capture my screen, but it works.
<!--{{{-->
<div class='toolbar' macro='toolbar [[ToolbarCommands::ViewToolbar]]'></div>
<div class='title' macro='view title'></div>
<div class='subtitle'>Updated <span macro='view modified date'></span> by <span macro='view modifier link'></span></div>
<div class='kWarning' macro="showWhenTagged PreviousQuarter">This note has NOT been updated for Spring Quarter 2011.</div>
<div class='tagging' macro='tagging'></div>
<div class='tagged' macro='tags'></div>
<div class='viewer' macro='view text wikified'></div>
<div class='tagClear'></div>
<!--}}}-->
Here are some sites that can serve as inspiration and reference for your projects.

Looking for an idea for a project? Pick any of the images from any of these sites and recreate the look. There you go.
*[[www.microscopyu.com|http://www.microscopyu.com/]] &mdash; A variety of images from a variety of microscopic techniques
*[[www.olympusbioscapes.com|http://www.olympusbioscapes.com/]] &mdash; An annual competition of microscopic photographic images from the life sciences. Look at all the pretty colors...
*[[www.umiushi.info|http://www.umiushi.info/]] &mdash; Sea slugs, oh my!
*[[www.radiolaria.org|http://www.radiolaria.org/]] &mdash; Single cell, aquatic organisms
*[[industrialdecay.blogspot.com|http://industrialdecay.blogspot.com/]] &mdash; Weekly postings of photographs taken in abandoned industrial sites (watch out for over-processed HDR!)
*How about [[a 6.5 gigapixel photograph of an eagle feather|http://www.gigamacro.com/gigapixel_macro_photography_gallery_eagle_feather.php]]?
*[[Gelatinous life|http://www.youtube.com/watch?v=3HzFiQFFQYw]] &mdash; a beautiful introduction to the range of gelatinous lifeforms. Fabulous variety and complexity of form and motion.

//See also// [[Brain kibble|http://www.kennethahuff.com/blog/category/brain-kibble/]]
{{{Hello world.}}}

These notes are a way for me to share information and resources. The notes were developed initially during my time in the Visual Effects Department at Savannah College of Art and Design.

//To ensure that you are viewing the latest version of these notes, refresh your browser//. {{kManicule{&#9758;}}} To see which notes have been updated recently, select the //index// link in the right sidebar and then the //Timeline// tab.

If you notice errors or dead links, please let me know. If you would like to contribute a resource to the notes, send me an [[email|mailto:ken@kennethahuff.com]] with the relevant information.

(Notes are maintained using a [[TiddlyWiki|http://www.tiddlywiki.com/]], a Wiki-like system contained in a single HTML file &mdash; everything...HTML, text, ~JavaScript, CSS...everything.)
<<tiddler SiteTitle>> &mdash; <<tiddler SiteSubtitle>>
In the context of my classes, the following are requirements. Outside of my classes, they are suggestions and are open to interpretation and variation.
*One- and two-letter variable names should be used only for shader writing (e.g., {{{Cd}}}, {{{P}}}, {{{N}}}, {{{Ka}}}, etc.), iterator variables (e.g., {{{i}}}, {{{j}}}, {{{k}}}, etc.) and &ldquo;eponymous&rdquo; data (e.g., {{{x}}}, {{{y}}}, {{{z}}}, {{{r}}}, {{{g}}}, {{{b}}}, {{{u}}}, {{{v}}}, etc.). Corollary: Do not use these well-established short variable names to refer to other kinds of data unless it is obvious in the context.
*If your variable represents a single object, the variable name should be singular; more than one object, plural. For example, {{{node}}} would represent a single node (e.g., a single ~PyMEL node representing a camera&rsquo;s transform), whereas {{{nodes}}} would represent a collection of nodes (e.g., a Python list).
''IMPORTANT:'' I prepared this information while at SCAD. It was current as of the start of the Spring 2011 quarter. As I no longer have access to the SCAD network or set-up, I cannot guarantee that this information still is valid.

----

There is a mechanism for individualizing the Linux environment at SCAD on a user-by-user basis. This involves placing a specially-named text file, {{{bash_custom}}} (no file name extension!), in the user&rsquo;s network home directory. These directories are available at {{{~/mount/stuhome}}} for students and {{{~/mount/fachome}}} for faculty. These {{{bash_custom}}} files act like {{{.bashrc}}} or {{{.profile}}} files and are executed when a new Terminal window is launched. See {{{man bash}}} for more information.

Under Linux, we have two home directories: one local to the specific workstation and another which is our network home directory. If you use the command {{{cd ~}}}, you will end up in the local home directory. If you make use of the local home directory, you should consider it temporary storage; you will end up with a separate local home directory on each Linux workstation that you use.

''Important:'' These configuration settings can change from quarter to quarter. Whenever there is an upgrade of the operating system or affected applications, you will need to confirm that these settings still are valid.

!!!!Example bash_custom for students
This example {{{bash_custom}}} sets up the following:
*Maya preferences (and scripts installed in the {{{maya/}}} directory) will follow you from workstation to workstation; this also makes the preferences cross-platform between Windows and Linux.
*Houdini version 11.0.639 will be the default version (instead of the 10.0.x version that is the standard default at SCAD); invoking {{{houdini}}} at the command line will start version 11.
*Houdini preferences will follow you (also Windows/Linux cross-platform).
{{{
# For Maya (as of/up to Maya 2011)
# define the Maya "home" directory which contains settings, scripts, etc.
export MAYA_APP_DIR=~/mount/stuhome/maya

# For Houdini (as of Houdini 11)
# Following two lines are standard setup for Houdini; the first would need to be updated if Houdini is updated
cd /opt/hfs11.0.639
source houdini_setup_bash
cd ~

# if the ~/houdini11.0 directory exists, delete it
if [ -d ~/houdini11.0 ];
then
    rm -f -r ~/houdini11.0
fi

# confirm the existence of ~/mount/stuhome/houdini11.0 , create if missing
if [ ! -d ~/mount/stuhome/houdini11.0 ];
then
    mkdir ~/mount/stuhome/houdini11.0
fi

# create a symbolic link to a network directory containing
# the Houdini 11 settings
ln -s ~/mount/stuhome/houdini11.0 ~/houdini11.0
}}}

If you would like //jEdit// preferences which follow you (Linux only), follow [[these instructions.|jEdit: Set-up at SCAD]]
''IMPORTANT:'' I prepared this information while at SCAD. It was current as of the start of the Spring 2011 quarter. As I no longer have access to the SCAD network or set-up, I cannot guarantee that this information still is valid.

----

The following steps will take you through the setup of jEdit under the Red Hat Linux environment at SCAD (specifically in Montgomery Hall). Once completed, your jEdit settings/preferences will follow you from workstation to workstation and you will have color coding of MEL scripts in jEdit. The jEdit application does not need to be installed as it is part of the standard setup and is available on all of the Linux workstations.

!!!!!Step 1: Establish a jEdit settings directory
Launch a Terminal window. Make a new directory on your network space for the settings (jEditSettings):
{{{
mkdir ~/mount/stuhome/jEditSettings
}}}

!!!!!Step 2: Modify your bash_custom file
Make the following addition to [[bash_custom]]:
{{{
# --- For jEdit ---
alias jedit='jedit -settings=~/mount/stuhome/jEditSettings'
}}}

If you already have a [[bash_custom]] file, add the code above to the file.

If you do not, you will need to create the file by saving the code above in a text file, {{{~/mount/stuhome/bash_custom}}}.

There are [[some additional notes on the bash_custom file here.|bash_custom]]

You should be able to copy the code above and paste it into your file. The file should be saved as a plain text file without a file extension. Whenever a new Terminal shell is created, the [[bash_custom]] file automatically is executed. It replicates the behavior of a .bashrc file, if that is familiar to you. The file can be created using jEdit. 
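The file also can be created directly from the Terminal with a heredoc. This is a sketch; the destination path assumes the standard student network mount described above:
{{{
# Write bash_custom containing the jEdit alias.
# DEST assumes the standard SCAD student network mount; adjust if yours differs.
DEST=~/mount/stuhome
mkdir -p "$DEST"    # no-op if the directory already exists
cat > "$DEST/bash_custom" <<'EOF'
# --- For jEdit ---
alias jedit='jedit -settings=~/mount/stuhome/jEditSettings'
EOF
}}}
Note that {{{>}}} overwrites the file; if you already have a {{{bash_custom}}}, append with {{{>>}}} instead.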

If you would like to see other options that you could include in the above alias version of the {{{jedit}}} command, enter {{{jedit -usage}}} in a Terminal.

Quit jEdit and close the Terminal window.

!!!!!Step 3: Testing the new settings directory
Open a new Terminal window. Launch jEdit from the command line. Change one of the Global Options (Utilities menu -> Global Options...), such as Gutter: Line numbering: On. Exit jEdit and relaunch it from the command line. The preference that you set should have stuck.

!!!!!Step 4: Installation of Maya-specific features for jEdit
jEdit recognizes modules, called &ldquo;modes&rdquo;, which add language-specific features, such as syntax and keyword color coding. Modes specific to Maya&rsquo;s MEL and Python implementations can be downloaded from creativecrash.com (formerly highend3d.com). [[Here is a direct link to the download page.|http://www.creativecrash.com/maya/downloads/applications/syntax-scripting/c/jedit-mel-syntax-highlighting-mode]] You will need to register with the site in order to download the files. You also can find the modes files by searching for &ldquo;jEdit&rdquo; on the site. A copy of the file, {{{mayaModes_2008.zip}}}, also is available in the _MATERIAL directory of the drop box during my scripting classes. As of this writing, the version on the site is labeled as specific to Maya 2008, but in my testing, it works well with Maya 2009&ndash;2011.

To install, decompress the downloaded file. Copy the three resulting files ({{{catalog}}}, {{{mel.xml}}} and {{{pythonMaya.xml}}}) to {{{~/mount/stuhome/jEditSettings/modes/}}}.
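Assuming the zip was decompressed into your current directory and you have no previously installed modes, the copy step looks like this (a sketch; adjust the paths to your own setup):
{{{
# Destination: the modes/ subdirectory of the jEdit settings
# directory created in Step 1 of the setup instructions.
MODES=~/mount/stuhome/jEditSettings/modes
mkdir -p "$MODES"    # create modes/ if it does not yet exist
for f in catalog mel.xml pythonMaya.xml; do
    if [ -f "$f" ]; then
        cp "$f" "$MODES/"    # copy each mode file from the unzip directory
    else
        echo "warning: $f not found in $(pwd)"
    fi
done
}}}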

If you already have installed additional modes, you will need to merge your existing {{{catalog}}} file with the additional catalog entries from the downloaded version.

To test the new modes, quit and restart jEdit. Open a {{{*.mel}}} file. You should see color coding.

!!!!Additional notes on jEdit
While this process demonstrates launching jEdit from the command line, it also can be launched by double-clicking, etc., and the settings will be used.

There are numerous plug-ins available for jEdit. Plug-ins typically are installed in the jEdit settings directory (in our case, {{{~/mount/stuhome/jEditSettings/}}}). 

jEdit knows which language mode to use based on the file extension. You will not see color coding until you have saved your file with an appropriate extension.

!!!Useful jEdit settings
The following settings are available in the Utilities -> Global Options... window:
*Editing: Folding mode: Indent
*Editing: Tab width: 4
*Editing: Indent width: 4
*Editing: Soft (emulated with spaces) tabs: On
*Gutter: Line numbering: On
*Plugin Manager: Install plugins in: jEdit settings directory
*Text Area: set font and colors to your personal preferences

!!!Troubleshooting jEdit
Occasionally, jEdit will lock up or not launch. Typically this happens after jEdit crashes and a temporary jEdit preference file has survived when it should have been deleted automatically.

If you are having trouble launching jEdit, try the following:
{{{
rm ~/mount/stuhome/jEditSettings/server
}}}
This assumes that you followed the setup instructions above, resulting in your jEdit preferences being stored in {{{~/mount/stuhome/jEditSettings/}}}.

If this happens to you often enough that it gets annoying, you may want to add the following line to your {{{bash_custom}}}:
{{{
alias jeditfix='rm -f ~/mount/stuhome/jEditSettings/server'
}}}

As of Fall 2010, this problem does not seem to be happening as often (at all).