Thursday, April 15, 2010

Questions

If you have questions on Boost.Extension, feel free to contact me:

Contact Jeremy Pack

Friday, March 26, 2010

Hanging Chain with Varying Weights

Here's another hanging chain scenario. The red links are a little longer and much heavier.

Thursday, March 25, 2010

The Hanging Chain Problem - or Catenary

Here's a video from some software I just wrote for the class I've been taking part-time (you should never stop learning!).



Here's my description of the problem from my report:

Consider a chain hung from the ceiling. How can we predict its shape as it hangs? Each link in the chain will naturally fall until it is pulled back by neighboring links. As the links fall, the potential energy of the chain decreases until the chain has reached its final shape.

If we minimize the potential energy of the whole chain, while keeping the links connected, we can find this final shape. The most obvious method to solve this problem would be to slowly attempt to move the links of the chain downward without unlinking them. Each link would move a little bit at a time, eventually reaching its final position.

However, such a method could be very slow - especially for large chains. First of all, the actual movement of each link is restricted by the neighboring links, meaning that only infinitesimal movements could be made at a time. In addition, moving any link will affect every other link in the chain.
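For reference, here is one way to state the problem as a constrained minimization - a sketch in my own notation, not the exact formulation from the report. Let (x_i, y_i) be the joints of the chain, m_i the mass of link i, and ℓ_i its length:

    \min_{(x_i,\,y_i)} \sum_{i=1}^{n} m_i \, g \, \frac{y_{i-1} + y_i}{2}
    \quad \text{subject to} \quad
    (x_i - x_{i-1})^2 + (y_i - y_{i-1})^2 = \ell_i^2, \qquad i = 1, \dots, n,

with the joints attached to the ceiling held fixed. The objective is the total potential energy (each link's mass times the height of its midpoint), and the constraints keep the links connected and at their fixed lengths.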


I'll add a few details of my solution once everyone else has turned their projects in.

Friday, January 29, 2010

Run-time Compilation - Performance

I put together an example of compiling code generated at run-time, and then loading it using Extension:

https://svn.boost.org/trac/boost/browser/sandbox/libs/extension/examples/runtime_compilation
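The linked example handles the loading step through Extension, but the general pattern looks roughly like the sketch below. It's simplified to POSIX dlopen/dlsym so that it stands alone, and the file names and the exported function are made up for illustration:

    #include <cstdlib>   // std::system
    #include <fstream>
    #include <iostream>
    #include <dlfcn.h>   // dlopen, dlsym (POSIX)

    int main() {
      // 1. Write out source code generated at run-time.
      {
        std::ofstream src("generated.cpp");
        src << "extern \"C\" int answer() { return 42; }\n";
      }

      // 2. Invoke the compiler to turn it into a shared library.
      if (std::system("g++ -O2 -fPIC -shared generated.cpp -o libgenerated.so") != 0)
        return 1;

      // 3. Load the freshly built library and look up the function by name.
      void* lib = dlopen("./libgenerated.so", RTLD_NOW);
      if (!lib) return 1;
      typedef int (*answer_func)();
      answer_func f = reinterpret_cast<answer_func>(dlsym(lib, "answer"));
      if (!f) return 1;

      std::cout << f() << std::endl;  // prints 42
      dlclose(lib);
    }

The real example uses Extension's portable wrappers instead of the raw dlopen/dlsym calls, and generates something more interesting than a constant.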

I'm preparing some experiments to compare the performance of numerical code compiled in advance to that compiled at run-time. There will be three test cases involving complex matrix operations:

  1. The matrix sizes are known at compile-time, and can be used by the compiler for optimization. The matrix computations are done in the main binary.
  2. The matrix sizes are not known at compile-time. The matrix computations are done in the main binary using algorithms that work for arbitrarily-sized matrices.
  3. The matrix sizes are not known when the binary is compiled. When the binary loads a matrix, it will generate code on the fly to process the matrix. It will then compile the code into a shared library, load the shared library, and run the algorithm.
My hypothesis is that it will be possible to make the runtime-generated code faster than the arbitrarily-sized matrix code, for the following reasons:
  • The compiler will be able to make certain optimizations that it can only do when it knows the size of the arrays being looped over.
  • I'll be able to use fewer variables and more constants in the compiled code.
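As a rough illustration of the first point (a made-up kernel, not one of the actual benchmarks): when the size is a compile-time constant, the compiler can unroll and vectorize the loops and fold the bounds into the generated code, which it cannot do when the size only arrives at run-time.

    // Size known at compile-time: the compiler sees N everywhere and can
    // unroll, vectorize, and keep the whole computation in registers.
    const int N = 4;
    void scale_fixed(double (&m)[N][N], double a) {
      for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
          m[i][j] *= a;
    }

    // Size known only at run-time: the bounds are opaque variables, so the
    // compiler has to emit a general loop with run-time trip counts.
    void scale_dynamic(double* m, int rows, int cols, double a) {
      for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j)
          m[i * cols + j] *= a;
    }

Generating and compiling the source once the sizes are known turns the second case back into the first.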
I expect it to be slower than the specialized pre-compiled code for two reasons, though:
  • Code in a shared library usually runs more slowly than code in the binary itself. One reason for this is that a shared library does not know at which address it will be loaded at run-time, so it can't hardcode as many pointers and offsets as an executable can. (Note that operating system libraries often have an optimization to avoid this problem, by using reserved address space.)
  • Even if it ran just as fast, the code still has to be compiled and loaded first.
Any opinions? I hope to post preliminary results in the next couple of weeks or so.

Explanation

I consider Extension basically complete - quite a number of people are using it in projects. The main issue holding me back from submitting it for review by the Boost community is that I wanted to make the Reflection library part of Extension in a natural way, and I've been struggling to find a balance between flexibility, performance, and readability for the Reflection API. Also, of course, the Boost library review process is a bit painful - I'd need to block out at least 100 hours of my time to make sure the documentation, API, code, tests, etc. were correct and easy to follow.

My previous post shouldn't be read as an obituary so much as an explanation of the slow progress lately. As I mentioned, I haven't been able to make much progress on the Reflection/Extension integration and new Extension features, partially because it's difficult to design an API without having code that requires it.

If you find any features lacking in Extension, or aren't sure how to get the functionality you need, feel free to describe your use case to me - I can add the relevant functionality or documentation.

Also, always feel free to send patches. Extension is in the Boost sandbox:

https://svn.boost.org/trac/boost/browser/sandbox/libs/extension
https://svn.boost.org/trac/boost/browser/sandbox/boost/extension

Thursday, January 28, 2010

Alternatives to Plugins

I honestly haven't put much time into Boost.Extension and Boost.Reflection lately. My current work has no need for plugins of that type - and it's difficult to continue designing something without good current use cases to consider.

I've actually been surprised by the use cases many people propose to me in e-mail. Some people are looking for a full reimplementation of COM, and want plugins that can be compiled with a very different set of compiler options from the binary that loads them. In most cases, I felt that their use case did not justify such a complex solution.

So, do you really need C++ plugins? Consider the following:

  1. Do you need C++ performance? Or would integrating plugins through Boost.Python be more effective? Python is certainly easier to deploy.
  2. Do you really need plugins? Or do you just need to compile your binary with different modules for different users?
  3. Are you shipping your source code to users to compile? If so, they can just compile in the plugins instead of putting them in a shared library.
If C++ plugins are still desired, here are my recommendations:
  1. Always compile the binary and plugins with the same compiler options.
  2. If you really need different compiler options:
    1. Consider building a distributed system, with the binary and the plugin communicating over network sockets or IPC.
    2. Consider using a plain C API, with very simple types being passed between the binary and the plugin (even structs can be a problem, since they may have different sizes depending on compiler options). A minimal sketch of such an API follows this list.
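To illustrate that last recommendation, here's a minimal sketch of what a plain C boundary between a host and its plugins might look like (the function names and types here are made up for illustration). Only built-in types cross the boundary, and everything is declared extern "C" so the two sides don't need matching C++ ABIs:

    // plugin_api.h - shared by the host binary and every plugin.
    // Only built-in types cross the boundary: no classes, no STL, no exceptions.
    #ifdef __cplusplus
    extern "C" {
    #endif

    // Returns the plugin's interface version so the host can reject mismatches.
    int plugin_api_version(void);

    // Processes 'count' doubles in place; returns 0 on success, nonzero on error.
    int plugin_process(double* data, int count);

    #ifdef __cplusplus
    }
    #endif

The host would look these two symbols up by name after loading each plugin's shared library, much like the run-time compilation sketch earlier on this page.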
I plan to start posting here about some of my current work on the performance of optimization algorithms. I'm building a framework to automatically instrument and test different linear and nonlinear optimization algorithms. I also hope to get back to dabbling in OpenGL...