Here is a hand-drawn target image for this texture, and here is the source texture sample

Interactive Synthesis of Natural Textures

This project is an extension of the Stanford work on texture synthesis (this nice site also has a collection of useful texture synthesis research links and links to texture sources).

We have come up with a way to modify the Stanford algorithm so that it works better for a specific class of textures and runs more than an order of magnitude faster (1-2 sec. on a 195 MHz R10000 to create a 500x500 image). The class of textures we are most interested in consists of arrangements of small objects of familiar but irregular size, which are very common in nature and include flower fields, leaves, pebbles, tree branches, etc. Surprisingly, the algorithm also does a reasonable job on many textures it was not designed for, such as the one shown above. Some examples can be found below.
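The paper below describes the actual modification; purely as an illustration of the kind of change that can yield such speedups, per-pixel synthesizers are often accelerated by scoring only a short list of "coherent" candidates, i.e. sample positions suggested by where the already-synthesized neighbors were copied from, instead of searching the whole sample. Here is a rough Python sketch of that idea (the function name and the `src` bookkeeping array are my own, not taken from the distributed code):

```python
import numpy as np

def coherent_candidates(src, y, x, n, h, w):
    """Collect candidate sample positions proposed by already-placed neighbors.

    src[y, x] stores the sample coordinates that output pixel (y, x) was
    copied from. Each causal neighbor proposes the sample pixel at the
    corresponding forward shift, so only a handful of candidates are scored
    per output pixel instead of every sample neighborhood -- which is where
    a large speedup over exhaustive search comes from.
    Borders of the output are assumed to be handled elsewhere.
    """
    cands = []
    for dy in range(-n, 1):
        for dx in range(-n, n + 1):
            if dy == 0 and dx >= 0:
                break  # causal scan order: skip the current and future pixels
            sy, sx = src[y + dy, x + dx]   # where this neighbor came from
            cy, cx = sy - dy, sx - dx      # undo the shift to get a proposal
            if n <= cy < h - n and n <= cx < w - n:
                cands.append((cy, cx))
    return cands
```

With neighborhood radius `n`, this yields at most 2n(n+1) candidates per pixel, independent of the sample size.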

Our algorithm is simple to understand and implement, but it was never intended to work for an arbitrary texture. In particular, the original Stanford algorithm usually performs better on smooth textures, such as waves, clouds, etc. If you want to find out whether it works for your favorite texture, I would recommend simply implementing it - this should not take more than a few hours.
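For reference, the core of per-pixel texture synthesis in the style of the Stanford work is neighborhood matching: each output pixel is copied from the sample location whose causal neighborhood best matches the output synthesized so far. A minimal brute-force grayscale sketch (my own simplification - it omits the multiresolution pyramid and acceleration structures of the real algorithms, and leaves image borders as random seed pixels):

```python
import numpy as np

def synthesize(sample, out_h, out_w, n=2, seed=0):
    """Brute-force per-pixel neighborhood-matching synthesis (illustration).

    For each output pixel in scanline order, compare its already-synthesized
    causal (L-shaped) neighborhood against every interior neighborhood in the
    sample and copy the best-matching sample pixel. Very slow; for clarity only.
    """
    rng = np.random.default_rng(seed)
    h, w = sample.shape
    # Seed the output with random pixels drawn from the sample.
    out = sample[rng.integers(0, h, (out_h, out_w)),
                 rng.integers(0, w, (out_h, out_w))].astype(float)
    for y in range(n, out_h):
        for x in range(n, out_w - n):
            # Causal neighborhood: n rows above, plus pixels to the left.
            patch = np.concatenate([out[y - n:y, x - n:x + n + 1].ravel(),
                                    out[y, x - n:x]])
            best, best_d = (n, n), np.inf
            for sy in range(n, h):
                for sx in range(n, w - n):
                    cand = np.concatenate(
                        [sample[sy - n:sy, sx - n:sx + n + 1].ravel(),
                         sample[sy, sx - n:sx]])
                    d = np.sum((patch - cand) ** 2)
                    if d < best_d:
                        best_d, best = d, (sy, sx)
            out[y, x] = sample[best]
    return out
```

Even this naive version makes the behavior on smooth vs. structured textures easy to explore.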

A method to provide the user with intuitive control over the synthesis process has also been developed. The user provides a target image, and the algorithm attempts to create a texture whose large-scale features are governed by this target while preserving a "texture-like" appearance. The degree of conformance to the target is varied by the number of iterations. The image at the top of this page is an example of this process (two iterations were performed in this case, but usually it takes many more to get a pronounced pattern). While writing text with textures is a fun application, the ability to modify a particular area of the result image (poor man's shadows) or to help the algorithm in a difficult case by giving structural guidance is much more useful. Examples of this can be found below. Of course, user control works better for some textures than for others. Again, the easiest way to find out is to try it.
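One simple way to picture how a target image can steer per-pixel synthesis (this is my own illustration of the general idea; see the paper for the actual formulation): add a target-conformance term to the candidate cost, so a candidate is penalized both for matching the synthesized neighborhood poorly and for differing from the user-drawn target value at that location.

```python
import numpy as np

def guided_cost(patch, cand_patch, cand_pixel, target_pixel, alpha=0.5):
    """Candidate cost mixing neighborhood match with target conformance.

    patch / cand_patch: already-synthesized output neighborhood and the
    candidate's neighborhood from the sample, as flat float arrays.
    cand_pixel:   the sample pixel value that would be copied.
    target_pixel: the user-drawn target value at this output location.
    alpha: weight of target conformance -- a hypothetical knob for this
    sketch, not a parameter taken from the paper.
    """
    match = np.sum((patch - cand_patch) ** 2) / patch.size
    conform = (cand_pixel - target_pixel) ** 2
    return (1.0 - alpha) * match + alpha * conform
```

Iterating the synthesis, using the previous result as the starting point, would then pull the output progressively closer to the target, consistent with conformance growing with the number of iterations.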

Details of the algorithm are described in the following paper. Full-resolution images from this paper (and they DO look much better large) are available in the examples section below. I would also suggest visiting the Stanford page, which shows their results for these textures and for many others.

Paper from 2001 Symposium on Interactive 3D Graphics:

Texture Synthesis Examples (graphics intensive).

NEW (4/2/2001): Code

Some people have asked me for the code. I'm putting it here, but please be sure to understand that this is research code (the politically correct term for a complete mess). It contains many things that are not used (a result of playing with different options) and is overall rather ugly. I have neither the time nor the desire to clean it up. No extensive testing has been done (for example, I always used square images, even though in theory the code should support an arbitrary aspect ratio).
I put the complete distribution here; it includes both the synthesis part and the user interface functions, along with some support structures. The UI uses the GLUT library, so you had better get it if you want to run the code as is. The code is known to compile and run on an SGI R10000 - just make the obvious modifications for $GLUTHOME in the makefile and type 'make'.
After compiling, run 'texture' to get usage information. With correct parameters, the program should bring up three windows: one with the sample texture, one you can draw in (the target image is displayed there), and one with the result. To choose a color for drawing, click on the sample image. The right mouse button brings up the options menu. Note that the current version does not write anything out unless asked to by the user during the session. Look inside the code to see the details (this can be painful, since there are almost no comments).
The algorithm is implemented in file '' (the part that matters is the first ~430 lines). The other files contain the user interface, support structures, etc.
Ok, here is the code directory, but remember: