Showing 24 posts
Today I would like to dive into the topic of unused parameters in C and C++: why they may happen and how to properly deal with them—because smart compilers will warn you about their presence should you enable -Wunused-parameter or -Wextra, and even error out if you are brave enough to use -Werror.
You would think that unused parameters should never exist: if the parameter is not necessary as an input, it should not be there in the first place! That’s a pretty good argument, but it does not hold when polymorphism enters the picture: if you want to have different implementations of a single API, such API will have to provide, on input, a superset of all the data required by all the possible implementations.
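As a hypothetical illustration of that argument (mine, not the post's): consider a reader API with two backends, only one of which cares about an offset parameter. Leaving the parameter unnamed in the other backend documents that it is intentionally unused and silences -Wunused-parameter:

#include <cstddef>
#include <cstdio>

// The common API: every implementation receives the offset, even if it
// does not need it.
class reader {
public:
    virtual ~reader(void) {}
    virtual int read_byte(std::size_t offset) = 0;
};

// This backend uses the offset...
class buffer_reader : public reader {
    const unsigned char* _data;
public:
    explicit buffer_reader(const unsigned char* data) : _data(data) {}
    int read_byte(std::size_t offset) { return _data[offset]; }
};

// ...while this one reads sequentially and ignores it: the parameter is
// left unnamed on purpose.
class stdin_reader : public reader {
public:
    int read_byte(std::size_t /* offset */) { return std::getchar(); }
};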
February 16, 2015
·
Tags:
c, cxx
Continue reading (about
6 minutes)
Following up on the previous C++ post, here comes one more thing to consider when writing header files in this language.
using and using namespace
The C++ using declaration and the more generic using namespace directive allow the programmer to bring a given symbol or all the symbols in a namespace, respectively, into the calling scope. This feature exists to simplify typing and, to some extent, to make the code more readable. (It may have come into existence to simplify the porting of old, non-ISO C++ code to modern C++, but that’s just a guess.)
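A minimal illustration of the two forms (my example, not taken from the post):

#include <string>
#include <vector>

using std::string;     // using declaration: imports a single symbol.
using namespace std;   // using directive: imports the whole namespace.

string greeting = "hello";  // Available thanks to the declaration.
vector< int > numbers;      // Available thanks to the directive.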
December 5, 2013
·
Tags:
cxx, header-files
Continue reading (about
3 minutes)
Hoping you had a nice holiday break, it is now the time to resume our series on header files with a new topic covering the world of template definitions in C++.
If you have ever used the Boost libraries, you may have noticed that aside from the regular hpp header files, they also provide a bunch of accompanying ipp files.

ipp files, in the general case, are used to provide the implementation for a template class defined in a corresponding hpp file. This stems from the fact that, in C++, the code for such template definitions must be available whenever the class is instantiated, and this in turn means that the template definitions cannot be placed in separate modules as is customary with non-template classes. In other words: putting the template definitions in cpp files just does not work. (I hear the C++ standards committee wants to “fix” this but I forget the details now and cannot find a reference.)
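A sketch of how such a split typically looks; the file names and the accumulator class are made up for illustration:

// accumulator.hpp: the template declaration, which pulls in its own
// definitions at the end.
template< typename T >
class accumulator {
public:
    accumulator(void);
    void add(const T& value);
    T total(void) const;
private:
    T _total;
};

#include "accumulator.ipp"

// accumulator.ipp: the template definitions, textually included by every
// translation unit that uses accumulator.hpp.
template< typename T >
accumulator< T >::accumulator(void) :
    _total()
{
}

template< typename T >
void
accumulator< T >::add(const T& value)
{
    _total += value;
}

template< typename T >
T
accumulator< T >::total(void) const
{
    return _total;
}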
December 2, 2013
·
Tags:
cxx, header-files
Continue reading (about
2 minutes)
Somebody recently tweeted me this message:
As a strong C++ dev and googler (hopefully with some #golang exposure), what’s your opinion on @rob_pike post? (goo.gl/xlMi4)
The answer deserves much more than my original reply included, so here it goes.
First of all, I found Rob’s article quite interesting. Basically, the authors of Go never expected Go to be more widely adopted by Python users than by C++ users. In fact, their original goal was to create a replacement for C++ as a systems programming language. The rationale for this is that C++ users like the verbosity and flexibility of the language, with all of its special cases, while Python users like simplicity and switch to Go when they look for a performance bump. This is all reasonable, but there is one detail I don’t quite agree with: Rob claims that whoever is excited by the new C++11 features will not move to Go, because liking new C++ features implies that one likes all the flexibility of C++ and will not enjoy Go’s simplicity.
September 4, 2012
·
Tags:
cxx
Continue reading (about
3 minutes)
In the previous post, I discussed the type-safe tree data structure that is now in the Kyua codebase, aimed at representing the configuration of the program. In this post, we'll see how this data structure ties to the parsing of the configuration file.
One goal in the design of the configuration file was to make its contents a simple key/value association (i.e. assigning values to predetermined configuration variables). Of course, the fact that the configuration file is just a Lua script means that additional constructions (conditionals, functions, etc.) can be used to compute these values before assignment, but in the end all we want to have is a collection of values for known keys. The tree data structure does exactly that: it maintains the mapping of keys to values and ensures that only a set of "valid" keys can be set. But, as a data structure, it does not contain any of the "logic" involved in computing those values: that is the job of the script.
Now, consider that we have the possible following syntaxes in the configuration file:
simple_variable = "the value"
complex.nested.variable = "some other value"
These assignments map, exactly, to a tree::set() function call: the name of the key is passed as the first argument to tree::set() and the value is passed as the second argument. (Let's omit types for simplicity.) What we want to do is modify the Lua environment so that these assignments are possible, and that when such assignments happen, the internal tree object gets updated with the new values.
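For illustration only (types omitted, as noted above), the two assignments shown earlier would boil down to calls along these lines on the C++ side:

tree.set("simple_variable", "the value");
tree.set("complex.nested.variable", "some other value");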
In order to achieve this, the configuration library modifies the Lua environment as follows:
June 2, 2012
·
Tags:
cxx, kyua, lua
Continue reading (about
3 minutes)
The core component of the new configuration library in Kyua is the utils::config::tree class: a type-safe, dynamic tree data type. This class provides a mapping of string keys to arbitrary types: all the nodes of the tree have a textual name, and they can either be inner nodes (no value attached to them), or leaf nodes (an arbitrary type attached as a value to them). The keys represent traversals through such tree, and do this by separating the node names with dots (things of the form root.inner1.innerN.leaf).
The tree class is the in-memory representation of a configuration file, and is the data structure passed around methods and algorithms to tune their behavior. It replaces the previous config static structure.
The following highlights describe the tree class:
config::tree tree;
// Predefine the valid keys.
tree.define< config::string_node >("kyua.architecture");
tree.define< config::int_node >("kyua.timeout");
// Populate the tree with some sample values.
tree.set< config::string_node >("kyua.architecture", "powerpc");
tree.set< config::int_node >("kyua.timeout", 300);
// Query the sample values.
const std::string architecture =
tree.lookup< config::string_node >("kyua.architecture");
const int timeout =
tree.lookup< config::int_node >("kyua.timeout");
tree.set_string("kyua.architecture", "powerpc");
tree.set_string("kyua.timeout", "300");
config::tree tree;
// Predefine a subtree as dynamic.
tree.define_dynamic("test_suites");
// Populate the subtree with fictitious values.
tree.set< config::string_node >("test_suites.NetBSD.ffs", "ext2fs");
tree.set< config::int_node >("test_suites.NetBSD.iterations", 5);
// And the querying would happen exactly as above with lookup().
May 29, 2012
·
Tags:
cxx, kyua
Continue reading (about
5 minutes)
In the previous blog post, I described the problems that the implementation of the Kyua configuration file parsing and in-memory representation posed. I also hinted that some new code was coming and, after weeks of work, I'm happy to say that it has just landed in the tree!
May 28, 2012
·
Tags:
cxx, kyua, lua
Continue reading (about
4 minutes)
A couple of years ago, when Kyua was still a newborn, I wrote a very ad-hoc solution for the parsing and representation of its configuration files. The requirements for the configuration were minimal, as there were very few parameters to be exposed to the user. The implementation was quick and simple to allow further progress on other, more important parts of the project. (Yep, quick is a euphemism for dirty: the implementation of the "configuration class" has to special-case properties everywhere to deal with their particular types... just as the Lua script has to do too.)
May 26, 2012
·
Tags:
cxx, kyua
Continue reading (about
3 minutes)
As you may already know, RAII is a very powerful and popular pattern in the C++ language. With RAII, you can wrap non-stack-managed resources into a stack-managed object such that, when the stack-managed object goes out of scope, it releases the corresponding non-stack-managed object. Smart pointers are just one example of this technique, but so are IO streams too.
Before getting into the point of the article, bear with me for a second while I explain what the stack_cleaner object of Lutok is. The "stack cleaner" takes a reference to a Lua state and records the height of the Lua stack on creation. When the object is destroyed (which happens when the declaring function exits), the stack is returned to its previous height thus ensuring it is clean. It is always a good idea for a function to prevent side-effects by leaving its outside world as it was — and, like it or not, the Lua state is part of the outside world because it is an input/output parameter to many functions.
Let's consider a piece of code without using the stack cleaner:
void
my_function(lutok::state& state, const int foo)
{
state.push_integer(foo);
... do something else in the state ...
const int bar = state.to_integer();
if (bar != 3) {
state.pop(1);
throw std::runtime_error("Invalid data!");
}
state.pop(1);
}
void
my_function(lutok::state& state, const int foo)
{
state.push_integer(foo);
try {
... do something else in the state ...
const int bar = state.to_integer();
if (bar != 3)
throw std::runtime_error("Invalid data!");
} catch (...) {
state.pop(1);
throw;
}
state.pop(1);
}
void
my_function(lutok::state& state, const int foo)
{
lutok::stack_cleaner cleaner(state);
state.push_integer(foo);
... do something else in the state ...
const int bar = state.to_integer();
if (bar != 3)
throw std::runtime_error("Invalid data!");
}
lutok::stack_cleaner(state);  // Unnamed temporary: destroyed immediately, so it cleans nothing.
lutok::stack_cleaner ANY_NAME_HERE(state);  // Named object: lives until the end of the scope.
September 17, 2011
·
Tags:
cxx, lua, lutok
Continue reading (about
4 minutes)
It has finally happened. Lutok is the result of what was promised in the "Splitting utils::lua from Kyua" web post.
Quoting the project web page:
Lutok provides thin C++ wrappers around the Lua C API to ease the interaction between C++ and Lua. These wrappers make intensive use of RAII to prevent resource leakage, expose C++-friendly data types, report errors by means of exceptions and ensure that the Lua stack is always left untouched in the face of errors. The library also provides a small subset of miscellaneous utility functions built on top of the wrappers.

Coming up with a name for this project was quite an odyssey, and is what has delayed its release more than I wanted. My original candidate was "luawrap" which, although not very original, was to-the-point and easy to understand. Unfortunately, that name did not clear with the legal department and I had to propose several other names, some of which were not acceptable either. Eventually, I settled with "Lutok", which comes from "LUa TOolKit".
Lutok focuses on providing a clean and safe C++ interface; the drawback is that it is not suitable for performance-critical environments. In order to implement error-safe C++ wrappers on top of a Lua C binary library, Lutok adds several layers of abstraction and error checking that go against the original spirit of the Lua C API and thus degrade performance.
Lutok was originally developed within Kyua but was later split into its own project to make it available to general developers.
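To give a rough idea of what that interface looks like, here is a tiny sketch pieced together from the calls shown in the stack_cleaner example above; the header names are guesses and the snippet is not meant to match the final API exactly:

#include <stdexcept>

#include <lutok/state.hpp>          // Hypothetical header names.
#include <lutok/stack_cleaner.hpp>

void
push_and_check(lutok::state& state, const int value)
{
    // Keep the Lua stack balanced no matter how this function exits.
    lutok::stack_cleaner cleaner(state);

    state.push_integer(value);
    if (state.to_integer() != value)
        throw std::runtime_error("Unexpected value on the stack");
    // No manual pop() calls needed: the cleaner restores the stack depth.
}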
September 15, 2011
·
Tags:
announce, cxx, kyua, lua, lutok
Continue reading (about
2 minutes)
If you remember a post from January titled C++ interface to Lua for Kyua (wow, time flies), the Kyua codebase includes a small library to wrap the native Lua C library into a more natural C++ interface. You can take a look at the current code as of r129.
Quoting the previous post:
The utils::lua library provides thin C++ wrappers around the Lua C API to ease the interaction between C++ and Lua. These wrappers make intensive use of RAII to prevent resource leakage, expose C++-friendly data types, report errors by means of exceptions and ensure that the Lua stack is always left untouched in the face of errors. The library also provides a place (the operations module) to add miscellaneous utility functions built on top of the wrappers.

While the RAII wrappers and other C++-specific constructions are a very nice thing to have, this library has to jump through a lot of hoops to interact with binary Lua versions built for C. This makes utils::lua not usable for performance-critical environments. Things would be way easier if utils::lua linked to a Lua binary built for C++, but unfortunately that is not viable in most, if not all, systems with binary packaging systems (read: most Linux distributions, BSD systems, etc.).
September 3, 2011
·
Tags:
cxx, kyua, lua
Continue reading (about
3 minutes)
The C++ interface to Lua implemented in Kyua exposes a lua::state class that wraps the lower-level lua_State* type. This class completely hides the internal C type of Lua to ensure that all calls that affect the state go through the lua::state class.
Things get a bit messy when we want to inject native functions into the Lua environment. These functions follow the prototype represented by the lua_CFunction type:
typedef int (*lua_CFunction)(lua_State*);

Now, let's consider this code:

int
awesome_native_function(lua_State* state)
{
// Uh, we have access to the raw lua_State*, so we bypass the lua::state!
... do something nasty ...
// Oh, and we can throw an exception here...
// with bad consequences.
}

void
setup(...)
{
lua::state state;
state.push_c_function(awesome_native_function);
state.set_global("myfunc");
... run some script ...
}

The fact that we must pass a lua_CFunction prototype to push_c_function means that such a function must have access to the raw lua_State* pointer... which we want to avoid.
typedef int (*cxx_function)(lua::state&);

In an ideal world, the lua::state class would implement a push_cxx_function that took a cxx_function, generated a thin C wrapper and injected such generated wrapper into Lua. Unfortunately, we are not in an ideal world: C++ does not have higher-order functions and thus the "generate a wrapper function" part of the previous proposal does not really work.
template< cxx_function Function >
int
wrap_cxx_function(lua_State* state)
{
try {
lua::state state_wrapper(state);
return Function(state_wrapper);
} catch (...) {
luaL_error(state, "Geez, don't go into C's land!");
}
}

This template wrapper takes a cxx_function object and generates a corresponding C function at compile time. This wrapper function ensures that C++ state does not propagate into the C world, as that often has catastrophic consequences. (Due to language limitations, the input function must have external linkage. So no, it cannot be static.)
int
awesome_native_function(lua::state& state)
{
// See, we cannot access lua_State* now.
... do something ...
throw std::runtime_error("And we can even do this!");
}

void
setup(...)
{
lua::state state;
state.push_c_function(
wrap_cxx_function< awesome_native_function >);
state.set_global("myfunc");
... run some script ...
}

Neat? I think so, but maybe not so much. I'm pretty sure there are cooler ways of achieving the above purpose in a cleaner way, but this one works nicely and has little overhead.
January 17, 2011
·
Tags:
cxx, kyua, lua
Continue reading (about
2 minutes)
About a week ago, I detailed the different approaches I encountered to deal with errors raised by the Lua C API. Later, I announced the new C++ interface for Lua implemented within Kyua. And today, I would like to talk about the specific mechanism I implemented in this library to deal with the Lua errors.
The first thing to keep in mind is that the whole purpose of Lua in the context of Kyua is to parse configuration files. This is an infrequent operation, so high performance does not matter: it is more valuable to me to be able to write robust algorithms fast than to have them run at optimal speed. The other key point to consider is that I want Kyua to be able to use prebuilt Lua libraries, which are built as C binaries.
The approach I took is to wrap every single unsafe Lua C API call in a "thin" (for some value of "thin", depending on the case) wrapper that gets called by lua_pcall. Anything that runs inside the wrapper is safe from Lua errors, as they are caught and safely reported to the caller.
Let's examine how this works by taking a look at an example: the wrapping of lua_getglobal. We have the following code (copy-pasted from the utils/lua/wrap.cpp file but hand-edited for publishing here):
static int
protected_getglobal(lua_State* state)
{
lua_getglobal(state, lua_tostring(state, -1));
return 1;
}

void
lua::state::get_global(const std::string& name)
{
lua_pushcfunction(_pimpl->lua_state, protected_getglobal);
lua_pushstring(_pimpl->lua_state, name.c_str());
if (lua_pcall(_pimpl->lua_state, 1, 1, 0) != 0)
throw lua::api_error::from_stack(_pimpl->lua_state,
"lua_getglobal");
}

The state::get_global method is my public wrapper for the lua_getglobal Lua C API call. This wrapper first prepares the Lua stack by pushing the address of the C function to call and its parameters and then issues a lua_pcall call that executes the C function in a Lua protected environment.
January 14, 2011
·
Tags:
cxx, kyua, lua
Continue reading (about
3 minutes)
Finally! After two weeks of holidays work, I have finally been able to submit Kyua's r39: a generic library that implements a C++ interface to Lua. The code is hosted in the utils/lua/ subdirectory.
From the revision description:
The utils::lua library provides thin C++ wrappers around the Lua C API to ease the interaction between C++ and Lua. These wrappers make intensive use of RAII to prevent resource leakage, expose C++-friendly data types, report errors by means of exceptions and ensure that the Lua stack is always left untouched in the face of errors. The library also provides a place (the operations module) to add miscellaneous utility functions built on top of the wrappers.In other words: this code aims to decouple all details of the interaction with the Lua C API from the main code of Kyua so that the high level algorithms do not have to worry about Lua C API idiosyncrasies.
January 8, 2011
·
Tags:
cxx, kyua, lua
Continue reading (about
2 minutes)
Some of the methods of the Lua C API can raise errors. To get an initial idea on what these are, take a look at the Functions and Types section and pay attention to the third field of a function description (the one denoted by 'x' in the introduction).
my_array = nil
return my_array["test"]

... which is obvious because indexing a non-table object is a mistake. Now let's consider how this code would look in C (modulo the my_array assignment):

lua_getglobal(state, "my_array");
lua_pushstring(state, "test");
lua_gettable(state, -2);

Simple, huh? Sure, but as it turns out, any of the API calls (not just lua_gettable) in this code can raise errors (I'll call them unsafe functions). What this means is that, unless you run the code with a lua_pcall wrapper, your program will simply exit in the face of a Lua error. Uh, your scripting language can "crash" your host program out of your control? Not nice.
January 7, 2011
·
Tags:
c, cxx, lua
Continue reading (about
6 minutes)
For the last couple of days, I have been playing around with the Lua C API and have been writing a thin wrapper library for C++. The main purpose of this auxiliary library is to ensure that global interpreter resources such as the global state or the execution stack are kept consistent in the presence of exceptions — and, in particular, that none of these are leaked due to programming mistakes when handling error codes.
To illustrate this point, let's forget about Lua and consider a simpler case. Suppose we lost the ability to pass arguments and return values from functions in C++ and all we have is a stack that we pass around. With this in mind, we could implement a multiply function as follows:
void multiply(std::stack< int >& context) {
const int arg1 = context.top();
context.pop();
const int arg2 = context.top();
context.pop();
context.push(arg1 * arg2);
}

And we could call our function like this:

std::stack< int > context;
context.push(5);
context.push(6);
multiply(context);
const int result = context.top();
context.pop();

In fact, my friends, this is more-or-less what your C/C++ compiler is internally doing when converting code to assembly language. The way the stack is organized to perform calls is known as the calling conventions of an ABI (language/platform combination).
void magic(std::stack< int >& context) {
const int arg1 = context.top();
context.pop();
const int arg2 = context.top();
context.pop();
context.push(arg1 * arg2);
context.push(arg1 / arg2);
try {
... do something with the two values on top ...
context.push(arg1 - arg2);
try {
... do something with the three values on top ...
} catch (...) {
context.pop(); // arg1 - arg2
throw;
}
context.pop();
} catch (...) {
context.pop(); // arg1 / arg2
context.pop(); // arg1 * arg2
throw;
}
context.pop();
context.pop();
}

The above is a completely fictitious and useless function, but serves to illustrate the point. magic() starts by pushing two values on the stack and then performs some computation that reads these two values. It later pushes an additional value and does some more computations on the three temporary values that are on the top of the stack.
class temp_stack {
std::stack< int >& _stack;
int _pop_count;
public:
temp_stack(std::stack< int >& stack_) :
_stack(stack_), _pop_count(0) {}
~temp_stack(void)
{
while (_pop_count-- > 0)
_stack.pop();
}
void push(int i)
{
_stack.push(i);
_pop_count++;
}
};

With this, we can rewrite our function as:
void magic(std::stack< int >& context) {
const int arg1 = context.top();
context.pop();
const int arg2 = context.top();
context.pop();
temp_stack temp(context);
temp.push(arg1 * arg2);
temp.push(arg1 / arg2);
... do something with the two values on top ...
temp.push(arg1 - arg2);
... do something with the three values on top ...
// Yes, we can return now. No need to do manual pop()s!
}

Simple, huh? Our temp_stack class keeps track of how many elements have been pushed on the stack. Whenever the function terminates, be it due to reaching the end of the body or due to an exception thrown anywhere, the temp_stack destructor will remove all elements previously registered from the stack. This ensures that the function leaves the global state (the stack) as it was on entry — modulo the function parameters consumed as part of the calling conventions.
December 27, 2010
·
Tags:
c, cxx, kyua, lua
Continue reading (about
6 minutes)
As part of the project I'm currently involved in at university, I started (re)writing a Pin tool to gather run-time traces of applications parallelized with OpenMP. This tool has to support two modes: one to generate a single trace for the whole application and one to generate one trace per parallel region of the application.
In the initial versions of my rewrite, I followed the idea of the previous version of the tool: have a -split flag in the frontend that enables or disables the behavior described above. This flag was backed by an abstract class, Tracer, and two implementations: PlainTracer and SplittedTracer. The thread-initialization callback of the tool then allocated one of these objects for every new thread and the per-instruction injected code used a pointer to the interface to call the appropriate specialized instrumentation routine. This pretty much looked like this:
void
thread_start_callback(int tid, ...)
{
if (splitting)
tracers[tid] = new SplittedTracer();
else
tracers[tid] = new PlainTracer();
}

void
per_instruction_callback(...)
{
Tracer* t = tracers[PIN_ThreadId()];
t->instruction_callback(...);
}

I knew from the very beginning that such an implementation was going to be inefficient due to the pointer dereference at each instruction and the vtable lookup for the correct virtual method implementation. However, it was a very quick way to move forward because I could reuse some small parts of the old implementation.
template< class Tracer >
class BasicTool {
Tracer* tracers[MAX_THREADS];
virtual Tracer* allocate_tracer(void) const = 0;
public:
Tracer*
get_tracer(int tid)
{
return tracers[tid];
}
};

class PlainTool : public BasicTool< PlainTracer > {
PlainTracer*
allocate_tracer(void) const
{
return new PlainTracer();
}
public:
...
} the_plain_tool;

// This is tool-specific, non-templated yet.
void
per_instruction_callback(...)
{
the_plain_tool.get_tracer(PIN_ThreadId())->instruction_callback(...);
}

What this design also does is force me to have two different Pin tools: one for plain tracing and another one for splitted tracing. Of course, I chose it to be this way because I'm not a fan of run-time options (the -split flag). Having two separate tools with well-defined, non-optional features makes testing much, much easier and... follows the Unix philosophy of having each tool do exactly one thing, but doing it right!
May 7, 2009
·
Tags:
cxx, pin
Continue reading (about
3 minutes)
By pure chance when trying to understand a build error of some C++ code I'm working on, I came across the correct C++ way of checking for numeric limits. Here is how.
In C, when you need to check for the limits of native numeric types, such as int or unsigned long, you include the limits.h header file and then use the INT_MIN/INT_MAX and ULONG_MAX macros respectively. In the C++ world, there is a corresponding climits header file to get the definition of these macros, so I always thought this was the way to follow.
However, it turns out that the C++ standard defines a limits header file too, which provides the numeric_limits<T> template. This template class is specialized for each of the native numeric types and exposes, among other things, their minimum and maximum values through static methods.
As an example, this C code:
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
printf("Integer range: %d to %d\n", INT_MIN, INT_MAX);
return EXIT_SUCCESS;
}

becomes the following in C++:
#include <cstdlib>
#include <iostream>
#include <limits>

int
main(void)
{
std::cout << "Integer range: "
<< std::numeric_limits< int >::min()
<< " to "
<< std::numeric_limits< int >::max()
<< "\n";
return EXIT_SUCCESS;
}

Check out the documentation for more details on additional methods!
May 4, 2009
·
Tags:
cxx
Continue reading (about
2 minutes)
In the past, I had come by some C++ code that used unnamed namespaces everywhere as the following code shows, and I didn't really know what the meaning of it was:
namespace {
class something {
...
};
} // namespace

Until now.
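For context, a brief sketch of what the construct does (my wording, not the post's): symbols defined inside an unnamed namespace are only visible within their translation unit, much like file-scope statics in C:

namespace {
    int counter = 0;       // Only visible in this translation unit.
}
static int old_counter = 0; // The traditional C-style equivalent.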
March 23, 2009
·
Tags:
cxx
Continue reading (about
2 minutes)
For a long time, ATF has shipped with build-time tests for its own header files to ensure that these files are self-contained and can be included from other sources without having to manually pull in obscure dependencies. However, the way I wrote these tests was a hack since the first day: I use automake to generate a temporary library that builds small source files, each one including one of the public header files. This approach works but has two drawbacks. First, if you do not have the source tree, you cannot reproduce these tests -- and one of ATF's major features is the ability to install tests and reproduce them even if you install from binaries, remember? And second, it's not reusable: I now find myself needing to do this exact same thing in another project... what if I could just use ATF for it?
Even if the above were not an issue, build-time checks are a nice thing to have in virtually every project that installs libraries. You need to make sure that the installed library is linkable to new source code and, currently, there is no easy way to do this. As a matter of fact, the NetBSD tree has such tests and they haven't been migrated to ATF for a reason.
I'm trying to implement this in ATF at the moment. However, running the compiler in a transparent way is a tricky thing. Which compiler do you execute? Which flags do you need to pass? How do you provide a portable-enough interface for the callers?
The approach I have in mind involves caching the same compiler and flags used to build ATF itself and using those as defaults anywhere ATF needs to run the compiler itself. Then, make ATF provide some helper check functions that call the compiler for specific purposes and hide all the required logic inside them. That should work, I expect. Any better ideas?
March 5, 2009
·
Tags:
atf, c, cxx
Continue reading (about
2 minutes)
If you are a frequent C/C++ programmer, you know how annoying code plagued with preprocessor conditionals can be: it quite often hides build problems such as trivial syntax errors or unused/undefined variables.
I was recently given some C++ code to rewrite^Wclean up and one of the things I did not like was a macro called DPRINT along with its use of fprintf. Why? First because this is C++, so you should be using iostreams. Second because by using iostreams you do not have to think about the correct printf-formatter for every type you need to print. And third because it obviously relied on the preprocessor and, of course, debug builds were already broken.
I wanted to come up with an approach to print debug messages that involved the preprocessor as little as possible. This application (a simulator) needs to be extremely efficient in non-debug builds, so leaving calls to printf all around that internally translated to noops at runtime wasn't a nice option because some serious overhead would still remain. So, if you don't use the preprocessor, how can you achieve this? Simple: current compilers have very good optimizers so you can rely on them to do the right thing for release builds.
The approach I use is as follows: I define a custom debug_stream class that contains a reference to a std::ostream object. Then, I provide a custom operator<< that delegates the insertion to the output stream. Here is the only place where the preprocessor is involved: a small conditional is used to omit the delegation in release builds:
template< typename T >
inline
debug_stream&
operator<<(debug_stream& d, const T& t)
{
#if !defined(NDEBUG)
d.get() << t;
#endif // !defined(NDEBUG)
return d;
}

There is also a global instance of a debug_stream called debug. With this in mind, I can later print debugging messages anywhere in the code as follows:
debug << "This is a debug message!n";So how does this not introduce any overhead in release builds?
March 2, 2009
·
Tags:
cxx
Continue reading (about
3 minutes)
A rather long while ago, I published a little teaser on std::set and people seemed to like it quite a bit. So here goes another one based on a problem a friend has found at work today. I hope to reproduce the main idea behind the problem correctly, but my memory is a bit fuzzy.
struct data {
int field;
};

template< typename Data >
class base {
public:
virtual ~base(void)
{
}
virtual bool equals(const Data& a,
const Data& b) const
{
return a == b;
}
};

class child : public base< data > {
public:
bool equals(const data& a,
const data& b) const
{
return a.field == b.field;
}
};

int
main(void)
{
data d1, d2;
base< data >* c = new child();
(void)c->equals(d1, d2);
delete c;
return 0;
}

Tip: If you make base…
October 22, 2008
·
Tags:
cxx
Continue reading (about
1 minute)
This does not build. Can you guess why? Without testing it?
std::set< int > numbers;
for (int i = 0; i < 10; i++)
numbers.insert(i);

for (std::set< int >::iterator iter = numbers.begin();
iter != numbers.end(); iter++) {
int& i = *iter;
i++;
}

Update (23:40): John gave a correct answer in the comments.
February 8, 2008
·
Tags:
cxx, teaser
Continue reading (about
1 minute)
This article first appeared on this date in O’Reilly’s ONLamp.com online publication. The content was deleted sometime in 2019 but I was lucky enough to find a copy in the WayBack Machine. I reformatted the text to fit the style of this site and fixed broken links, but otherwise the content is a verbatim reproduction of what was originally published.
C++, with its complex and complete syntax, is a very versatile language. Because it supports object-oriented capabilities and has powerful object libraries—such as the STL or Boost—one can quickly implement robust, high-level systems. On the other hand, thanks to its C roots, C++ allows the implementation of very low-level code. This has advantages but also carries some disadvantages, especially when one attempts to write high-level applications.
May 4, 2006
·
Tags:
cxx, featured, onlamp, programming
Continue reading (about
17 minutes)