
#define tovec(x) x[i]

#define loopc(expr, size) for (int i = 0; i < (size); ++i) expr

loopc( (tovec(a) = tovec(a) + tovec(b) * x + tovec(c) * tovec(d) + y), 3)

// the scalar expression
a = a + b * x + c * d + y
// becomes the element-wise loop body
a[i] = a[i] + b[i] * x + c[i] * d[i] + y
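A compilable sketch of the same idea, assuming a plain for loop behind the loop macro (the apply_formula wrapper is illustrative):

```cpp
#define tovec(x) x[i]
#define loopc(expr, size) for (int i = 0; i < (size); ++i) expr

// Applies the scalar formula a = a + b*x + c*d + y element-wise over arrays.
inline void apply_formula(float* a, const float* b, const float* c,
                          const float* d, float x, float y, int n) {
    loopc((tovec(a) = tovec(a) + tovec(b) * x + tovec(c) * tovec(d) + y), n);
}
```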

Members' privilege of mutation: operators which change the first operand (eg +=, =,
prefix ++) should be implemented as member functions, and should exclusively
implement the guts of all overloads. Postfix ++ is a second-class citizen; it is
implemented as Obj ret = *this; ++*this; return ret;. Note that this sometimes
extends to copy constructors, which may contain *this = initializer.

Rule of freedom for commuters: only value-producing binary operators (eg +, /)
should be free functions; all other operators (eg unary anything) should be
members. These binary operators inherently make a copy of the object; they are
implemented as Obj ret = lhs; ret @= rhs; return ret; where @ is the binary
operator and lhs and rhs are the left-hand side and right-hand side arguments,
respectively.

Golden rule of C++ friendship: avoid friendship. friend pollutes the semantics of a
design. Overloading corollary: overloading is simple if you follow the above rules,
and then friend is harmless. friending boilerplate overload definitions allows them
to be placed inside the class { braces.

Note that some operators cannot be free functions: =, ->, [], and (), because the
standard specifically says so in section 13.5. I think that's all… I thought unary
& and * were too, but I was apparently wrong. They should always be overloaded as
members, though, and only after careful thought!
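A sketch of the harmless-friend case: the inserter below is a free function, but defining it inside the class { braces via friend keeps it next to the data it prints (the Point type is illustrative):

```cpp
#include <ostream>

struct Point {
    int x, y;
    // A free function defined inside the class braces via friend;
    // it is found by argument-dependent lookup, not a member.
    friend std::ostream& operator<<(std::ostream& os, Point const& p) {
        return os << "(" << p.x << ", " << p.y << ")";
    }
};
```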

--------------------
Overloads for +=, +, -=, -, etc. have a special pattern:

struct Vector2 {
    float x, y;
    Vector2& operator+=(Vector2 const& other) {
        x += other.x;
        y += other.y;
        return *this;
    }
    Vector2& operator-=(Vector2 const& other) {
        x -= other.x;
        y -= other.y;
        return *this;
    }
};
Vector2 operator+(Vector2 a, Vector2 const& b) {
    // note 'a' is passed by value and thus copied
    a += b;
    return a;
}
Vector2 operator-(Vector2 a, Vector2 const& b) { return a -= b; } // compact
-----------------------------------------------------------------------------------

int getIndex(int row, int col) { return row*NCOLS+col; }


int myArray[3] = {1,2,3};
std::array<int, 3> a = {{1, 2, 3}};
std::array<std::array<int,5>,20> sarr;
T c[N];
std::array<T, N> cpp;
// from C to C++
std::copy(std::begin(c), std::end(c), std::begin(cpp));
// from C++ to C
std::copy(std::begin(cpp), std::end(cpp), std::begin(c));

std::vector<std::vector<int> > varr(4, std::vector<int>(4));

int a[5] = {1,2,3,4,5};
int b[5] = {5,4,3,2,1};
memcpy(a, b, sizeof(a));
std::copy(std::begin(b), std::end(b), std::begin(a));

template <typename T, std::size_t N>
T* begin(T (&a)[N]) {
    return &a[0];
}
template <typename T, std::size_t N>
T* end(T (&a)[N]) {
    return begin(a) + N;
}

std::array<int, 5> a = {1,2,3,4,5};
std::array<int, 5> b = {5,4,3,2,1};
a = b;

---------------------------------------------------------------------------------
Derived* -> Base*: static_cast
Base* -> Derived*: dynamic_cast
const int* -> int*: const_cast
float* -> char*: reinterpret_cast
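A compilable sketch of the cheat-sheet above (the Base/Derived pair is illustrative):

```cpp
struct Base { virtual ~Base() {} };
struct Derived : Base {};

// Exercises each row of the table; returns true when the checked
// downcast recovers the original object.
bool cast_roundtrip() {
    Derived d;
    Base* b = static_cast<Base*>(&d);        // Derived* -> Base* (upcast)
    Derived* d2 = dynamic_cast<Derived*>(b); // Base* -> Derived* (checked, nullptr on failure)

    const int ci = 42;
    int* ip = const_cast<int*>(&ci);         // const int* -> int* (don't write through it!)

    float f = 1.0f;
    char* bytes = reinterpret_cast<char*>(&f); // float* -> char* (byte-level view)

    return d2 == &d && ip == &ci && bytes == reinterpret_cast<char*>(&f);
}
```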
---------------------------------------------------------------------------------
#define foreach(list, index) for(index = 0; index < list.size(); index++)

foreach(cookies, i)
printf("Cookie: %s", cookies[i]);

---------
#ifdef ARE_WE_ON_WIN32
#define close(parm1) _close (parm1)
#endif
In C++, the same could be obtained through the use of inline functions:
#ifdef ARE_WE_ON_WIN32
inline int close(int i) { return _close(i); }
#endif

#define ASSERT_THROW(condition) \
if (!(condition)) \
throw std::runtime_error(#condition " is false");

#define ASSERT_RETURN(condition, ret_val) \
if (!(condition)) { \
assert(false && #condition); \
return ret_val; }
// should really be in a do { } while(false) but that's another discussion.
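The do { } while(false) version would look like this (a sketch; std::runtime_error stands in because standard std::exception has no string constructor):

```cpp
#include <stdexcept>

#define ASSERT_THROW(condition)                               \
    do {                                                      \
        if (!(condition))                                     \
            throw std::runtime_error(#condition " is false"); \
    } while (false)
```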

------------------------
String constants are sometimes better defined as macros since you can do more with
string literals than with a const char *.

e.g. String literals can be easily concatenated.

#define BASE_HKEY "Software\\Microsoft\\Internet Explorer\\"

// Now we can concat with other literals
RegOpenKey(HKEY_CURRENT_USER, BASE_HKEY "Settings", &settings);
RegOpenKey(HKEY_CURRENT_USER, BASE_HKEY "TypedURLs", &URLs);

If a const char * were used then some sort of string class would have to be used to
perform the concatenation at runtime:

const char* BaseHkey = "Software\\Microsoft\\Internet Explorer\\";

RegOpenKey(HKEY_CURRENT_USER, (string(BaseHkey) + "Settings").c_str(), &settings);
RegOpenKey(HKEY_CURRENT_USER, (string(BaseHkey) + "TypedURLs").c_str(), &URLs);

In C++11, I'd consider this to be the most important part (other than include
guards). Macros are really the best thing that we have for compile-time string
processing.

---------------------
Sometimes, you want to generate code that needs to be copy/pasted by the
precompiler:

#define RAISE_ERROR_STL(p_strMessage) \
do \
{ \
    try \
    { \
        std::stringstream strBuffer; \
        strBuffer << p_strMessage; \
        raiseSomeAlert(__FILE__, __FUNCSIG__, __LINE__, strBuffer.str().c_str()); \
    } \
    catch (...) {} \
} \
while (false)

which enables you to code this:

RAISE_ERROR_STL("Hello... The following values " << i << " and " << j << " are wrong");

And can generate messages like:

Error Raised:
====================================
File : MyFile.cpp, line 225
Function : MyFunction(int, double)
Message : "Hello... The following values 23 and 12 are wrong"

Even with C++11, a lot of what your macro does can be left for a function to do:
#include <sstream>
#include <iostream>
using namespace std;

void trace(char const* file, int line, ostream& o)
{
    cerr << file << ":" << line << ": "
         << static_cast<ostringstream&>(o).str().c_str() << endl;
}

struct Oss
{
    ostringstream s;
    ostringstream& lval() { return s; }
};

#define TRACE(ostreamstuff) trace(__FILE__, __LINE__, Oss().lval() << ostreamstuff)

int main()
{
    TRACE("Hello " << 123);
    return 0;
}

-----------
You can enable additional logging in a debug build and disable it for a release
build without the overhead of a Boolean check. So, instead of:

void Log::trace(const char *pszMsg) {
    if (!bDebugBuild) {
        return;
    }
    // Do the logging
}

log.trace("Inside MyFunction");

You can have:

#ifdef _DEBUG
#define LOG_TRACE log.trace
#else
#define LOG_TRACE void
#endif

LOG_TRACE("Inside MyFunction");

When _DEBUG is not defined, this will not generate any code at all. Your program
will run faster and the text for the trace logging won't be compiled into your
executable.

inline void LogTrace(const char*) { if (DEBUG) doTrace(); }

should be optimized away in release builds.

-----------
#define my_free(x) do { free(x); x = NULL; } while (0)
template<class T> inline void destroy(T*& p) { delete p; p = 0; }

-----------
Yet another foreach macros. T: type, c: container, i: iterator

#define foreach(T, c, i) for(T::iterator i=(c).begin(); i!=(c).end(); ++i)
#define foreach_const(T, c, i) for(T::const_iterator i=(c).begin(); i!=(c).end(); ++i)

Usage (concept showing, not real):

void MultiplyEveryElementInList(std::list<int>& ints, int mul)
{
    foreach(std::list<int>, ints, i)
        (*i) *= mul;
}

int GetSumOfList(const std::list<int>& ints)
{
    int ret = 0;
    foreach_const(std::list<int>, ints, i)
        ret += *i;
    return ret;
}

------------
#ifdef WIN32
#define TYPES_H "WINTYPES.H"
#else
#define TYPES_H "POSIX_TYPES.H"
#endif

#include TYPES_H

Much more readable than implementing it in other ways, in my opinion.

-----------------------------------------------------------------------------------
#define malloc(x) my_debug_malloc(x, __FILE__, __LINE__)
#define free(x) my_debug_free(x, __FILE__, __LINE__)

----------
#define safe_divide(res, x, y) if (y != 0) res = x/y;

and then

if (something) safe_divide(b, a, x);
else printf("Something is not set...");

It actually becomes completely the wrong thing: the else binds to the macro's
hidden if, not to if (something).

The if else problems can be solved by wrapping the macro body inside do { ... }
while(0). This behaves as one would expect with respect to if and for and other
potentially-risky control-flow issues. But yes, a real function is usually a better
solution. #define macro(arg1) do { int x = func(arg1); func2(x); } while(0)
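A sketch of the wrapped macro in the dangling-else situation above (divide_or_default is an illustrative harness):

```cpp
#define safe_divide(res, x, y)           \
    do {                                 \
        if ((y) != 0) (res) = (x) / (y); \
    } while (0)

// With the do/while wrapper the macro is a single statement,
// so the else below pairs with 'if (something)' as intended.
int divide_or_default(int a, int x, bool something) {
    int b = -1;
    if (something) safe_divide(b, a, x);
    else b = 0;
    return b;
}
```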
----------------------------------------

A constexpr symbolic constant must be given a value that is known at compile time.
For example:

constexpr int max = 100;

void use(int n)
{
    constexpr int c1 = max+7; // OK: c1 is 107
    constexpr int c2 = n+7;   // Error: we don’t know the value of n
    // ...
}
To handle cases where a “variable” is initialized with a value that is not known at
compile time but never changes after initialization, C++ offers a second form of
constant (a const). For example:

constexpr int max = 100;

void use(int n)
{
    constexpr int c1 = max+7; // OK: c1 is 107
    const int c2 = n+7;       // OK, but don’t try to change the value of c2
    // ...
    c2 = 7; // error: c2 is a const
}

Such “const variables” are very common for two reasons:

- C++98 did not have constexpr, so people used const.

- “Variables” that are not constant expressions (their value is not known at
compile time) but do not change values after initialization are in themselves
widely useful.

Reference : "Programming: Principles and Practice Using C++" by Stroustrup


------------------------------------------------------------------

ISO standard languages -

C/C++/C#, Javascript/ECMA Script, Ruby, SQL, Basic/Fortran/COBOL/Pascal, ISLISP

For cases where you have under a few thousand entries, linear search can be
amazingly fast thanks to modern CPU caches and prefetchers.

#define SUCCEEDED(hr) ((HRESULT)(hr) >= 0)

is in no way superior to the type safe:

inline bool succeeded(int hr) { return hr >= 0; }

compile-time constants in C
----------------------------
const int a = 5;
int vect1[a] = {1,2,3,4,5};
errors:
"excess elements in array initializer"
"variable-sized object may not be initialized"
"control reaches end of non-void function [-Wreturn-type]"
If you compile this with gcc, which invokes a C compiler, you'll get an error, and
likewise with clang. This is because `const` does not declare a compile-time
constant in C; `const` is just a type qualifier which says a value in memory (yes,
it's in memory; you can even use & on it) is read-only and that no attempt to
change its value will be made during the execution of the program.

If I use #define [variable] and put it into the array brackets, it works normally.
Why? Because the preprocessor takes this code:

#define ARRAY_SIZE 5
int vect1[ARRAY_SIZE] = {1,2,3,4,5};

And converts it to this:

int vect1[5] = {1,2,3,4,5};

enum { a = 5 }; is another solution; enums are by far the best way to declare
compile-time constants in C. If you want real compile-time constants in C, use
#define or, better yet, an enum.
-----------------------------------------------------------------------

In many programming environments for C and C-derived languages on 64-bit machines,
int variables are still 32 bits wide, but long integers and pointers are 64 bits
wide. These are described as having an LP64 data model.[41][42] Another alternative
is the ILP64 data model in which all three data types are 64 bits wide, and even
SILP64 where short integers are also 64 bits wide.[44][45] However, in most cases
the modifications required are relatively minor and straightforward, and many well-
written programs can simply be recompiled for the new environment with no changes.
Another alternative is the LLP64 model, which maintains compatibility with 32-bit
code by leaving both int and long as 32-bit. LL refers to the long long integer
type, which is at least 64 bits on all platforms, including 32-bit environments.
Many 64-bit platforms today use an LP64 model (including Solaris, AIX, HP-UX,
Linux, macOS, BSD, and IBM z/OS). Microsoft Windows uses an LLP64 model. The
disadvantage of the LP64 model is that storing a long into an int may overflow. On
the other hand, converting a pointer to a long will “work” in LP64. In the LLP64
model, the reverse is true. These are not problems which affect fully standard-
compliant code, but code is often written with implicit assumptions about the
widths of data types. C code should prefer (u)intptr_t instead of long when casting
pointers into integer objects.
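A sketch of the (u)intptr_t advice: the round-trip below is portable across LP64 and LLP64, whereas a cast through long would truncate pointers on 64-bit Windows:

```cpp
#include <cstdint>

// Round-trips a pointer through an integer type guaranteed wide enough
// to hold it. Casting through 'long' instead would truncate under LLP64,
// where long is 32 bits but pointers are 64.
bool pointer_roundtrip(int* p) {
    std::uintptr_t bits = reinterpret_cast<std::uintptr_t>(p);
    int* back = reinterpret_cast<int*>(bits);
    return back == p;
}
```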

----------------------------------------------------------

-----------------------------------------------------------------------------------
Data Classes - a brilliant concept with half baked execution

Data classes in and of themselves are a brilliant idea. Have a look:

data class Person(val firstName: String, val lastName: String)

You specify the fields of a class and their types, and you get:

- the specified fields
- getters (and setters for var fields)
- a constructor which does exactly what you would expect
- hashCode
- equals
- toString
- copy
- various utilities

... all for free, without writing them. That means your hashCode() will never go
out of sync with your equals(). Your toString() will never forget to print a field
you just recently added to the class. All of this is possible because there is no
text representation for these things; the compiler just generates the bytecode
directly. It's a really cool concept and vastly superior to generating all of the
aforementioned things via an IDE (because the generated code can get out of sync
easily).
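For contrast, the kind of boilerplate a data class generates for you, hand-written in C++ (a sketch; the Person type mirrors the Kotlin example):

```cpp
#include <string>

struct Person {
    std::string firstName;
    std::string lastName;
};

// equals: in a data class the compiler writes this for you.
bool operator==(Person const& a, Person const& b) {
    return a.firstName == b.firstName && a.lastName == b.lastName;
}

// toString: likewise generated in a data class; here it only stays in
// sync with the fields because a human remembers to update it.
std::string to_string(Person const& p) {
    return "Person(firstName=" + p.firstName + ", lastName=" + p.lastName + ")";
}
```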
-----------------------------------------------------------------------------------
----------
You can improve that with typedef. Also, judging by all the code I have seen, most
folks (including myself until a few years ago) are unaware you can typedef a
function type, as opposed to a function pointer type.
Example:

typedef int callback(void *, int);

Now you can declare function pointers that look like ordinary pointers, and which
don't hide the fact that the variable or argument is a pointer in the type:

void operation(callback *cb, void *opaque, int arg) {
    // ....
    cb(opaque, arg);
}

Or…

struct handlers {
    callback *foo;
    callback *bar;
    callback *baz;
};
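A runnable sketch of the typedef in use (adder and its opaque running total are illustrative):

```cpp
// A function type, not a function-pointer type.
typedef int callback(void *, int);

// Matches the callback signature: adds 'arg' into the int behind 'opaque'.
static int adder(void *opaque, int arg) {
    int *total = static_cast<int *>(opaque);
    *total += arg;
    return *total;
}

// The parameter type makes the pointer-ness explicit: callback *cb.
int operation(callback *cb, void *opaque, int arg) {
    return cb(opaque, arg);
}
```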
-----------------------------------------------------------------------------------
----------

I feel that "error" handling is one of those core things that people have rarely
fully understood. Newbies especially are often confused, and things get muddled. In
all honesty it took me 10+ years to reach some kind of clarity on this. Anyway, I
find that when you lack that clarity your code will be quite messy (isn't this
always the case?) and things will get muddled. And since error handling is so
precarious, your implementation quality will suffer considerably.

So below is my take on this, with 3 clear labels and guidelines on how to apply
them. Hope this helps.

There are 3 kinds of "errors".

1. Bugs.

- created by programmer.

- invalid state of the application: it has transcended its own logical realm and
you can't reason about its behaviour anymore.

- null pointers, OOB, etc.

2. Errors that are "expected" part of the program execution.

- you need to write logic flow to deal with this

- it's expected and quite normal that this might happen

- incorrect (user) input, file not found, socket timed out etc.

- this is really just an error from the end user's perspective.

3. Errors that are "unexpected".

- some very unexpected error

- system resource allocation failed, out of memory, out of file handles etc.

- critical resource was not accessible (for example a "must have" config file was
not found)

How to deal with these?

1. Abort and dump core. Yep seriously, just do it. Blow up with a bang and leave a
stack trace that you can analyze in the post-mortem debugger and see what went
wrong. As a result your application will be simpler and more straightforward.
Simply, don't try to write logic to deal with programmer failures. It will just
clutter your program, mask the problem and make fixing it harder.

int divide(int x, int y) {
    if (y == 0) {
        throw std::string("Divide by zero");
    }
    return x / y;
}

Passing y=0 to the function is clearly a bug; it'd be much better to core dump
here. Unless the function was designed with the double purpose of validating the
input and performing the actual operation. I'd much rather split these into two
different functions, since the core function may be called from contexts that don't
require any input validation at all. And the validation function clearly should not
use exceptions either, since it's quite normal and expected that input from
external sources (such as the user) can be malformed, and you probably want to
write logic that then bashes (sorry... informs) the user about their wrong input.

I also have my own assert macro that when violated terminates the process with a
core dump unless running in a debugger when it triggers a break point.
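A sketch of such a macro; the debugger check is platform-specific (e.g. IsDebuggerPresent on Windows), so this minimal version just aborts, which leaves a core dump where core dumps are enabled:

```cpp
#include <cstdio>
#include <cstdlib>

/* Minimal version: report the failing condition and abort. std::abort
 * raises SIGABRT, producing a core dump for the post-mortem debugger.
 * A real version would first detect an attached debugger and trigger a
 * breakpoint instead (platform-specific). */
#define MY_ASSERT(cond)                                           \
    do {                                                          \
        if (!(cond)) {                                            \
            std::fprintf(stderr, "%s:%d: assertion failed: %s\n", \
                         __FILE__, __LINE__, #cond);              \
            std::abort();                                         \
        }                                                         \
    } while (false)
```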

Figure out what the remaining viable state of the program is, report the error
(yes, loudly!), and recover. This would be the far more correct advice for no 1
(except for tiny binaries doing a single job only, as said).

2. Use error codes.

3. Use exceptions.

---
With exceptions, I can trivially pass error information through my whole call stack
without any manual work and boilerplate. The whole syntax-clutter / OMG-but-try-
catch-is-so-ugly complaint is a very stupid argument: with exceptions, you can opt
out of local error handling and opt in at a higher level in the call stack without
losing any information.

It is also possible with global or contextual error state. This approach is used in
iostreams, for example, and in a broken way in the C library via errno (which does
not work unless you check it very near the failing call). A similar approach is
often used in databases and file systems, marking an object as dirty or broken.
Like the above, it is prone to errors being ignored while code keeps manipulating
the broken object.

I have seen that using error singletons together with logging at the point of error
worked rather nicely. Surely as global state it is not thread safe, but when the
state belongs to a component with a well defined API and uses a "once an error,
always an error" strategy (so error recovery requires constructing a new
component), then thread safety is trivial to address. A bonus of this approach is
that the error path through the code is the same as the non-error path. Thus
getting good coverage for error cases in unit and integration tests is easier.

Monadic error handling is quite nice for domain-specific errors, but for
exceptional situations which are not supposed to happen, you still need some sort
of an exception system. And even with domain error conditions, exceptions have a
nice property of saving a stack trace, which can make error hunting a bit simpler.

Exceptional situations as in crashes, like writing to the 0x0 address? Otherwise
no, there is really no good reason for having some orthogonal value-returning
system that can jump up the stack until it's caught (if ever). Many situations that
are often considered exceptional really are not: cannot connect to server, no such
file or directory, cannot bind to port.

Any situation that prevents functionality from working and should not happen in
normal flow is exceptional, including such connection failures or file open
failures. They may have to be handled, but not at cost to the hot path. You cannot
typically just "eat" such an error with default behaviour and expect whatever
relied on it to work properly.

Check out std::optional from C++17

Early APIs returned error codes or set global error flags, but it was easy to
forget to check these.
Exceptions were introduced to force explicit error handling, but it was still hard
to know if a function could fail in practice.
Checked exceptions were introduced to make expected failure conditions more
explicit.
Checked exceptions prove hard to reconcile with higher-order functions, go back 2
spaces.
Either types give many of the advantages of unchecked exceptions and knowing that a
function can fail in practice, but only for functions where the caller is relying
on using the result.

An exception can carry more information about the source of a problem. OpenFile()
can throw FileNotFound or NoPermission or TooManyDescriptors etc. A None does not
carry this information.
Exceptions can be used in contexts that lack return values (e.g. with constructors
in languages that have them).
An exception allows you to very easily send the information up the stack, without a
lot of if None return None-style statements.
Exception handling almost always carries higher performance impact than just
returning a value.
Most importantly of all, an exception and a Maybe monad have different purposes -
an exception is used to signify a problem, while a Maybe isn't.
"Nurse, if there's a patient in room 5, can you ask him to wait?"
Maybe monad: "Doctor, there is no patient in room 5."
Exception: "Doctor, there is no room 5!"
(notice the "if" - this means the doctor is expecting a Maybe monad)

---
Something that we're seeing nowadays in real life is that many asynchronous
programming solutions are adopting a variant of the Either style of error handling.
Consider Javascript promises, as detailed in any of these links:

The concept of promises allows you to write asynchronous code like this (taken from
the last link):
var greetingPromise = sayHello();

greetingPromise
    .then(addExclamation)
    .then(function (greeting) {
        console.log(greeting); // 'hello world!!!!'
    }, function (error) {
        console.error('uh oh: ', error); // 'uh oh: something bad happened'
    });

Basically, a promise is an object that:

- Represents the result of an asynchronous computation, which may or may not have
finished yet;

- Allows you to chain further operations to perform on its result, which will be
triggered when that result is available, and whose results in turn are available as
promises;

- Allows you to hook up a failure handler that will be invoked if the promise's
computation fails. If there is no handler, then the error is propagated to later
handlers in the chain.
Basically, since the language's native exception support doesn't work when your
computation is happening across multiple threads, a promises implementation has to
provide an error-handling mechanism, and these turn out to be monads similar to
Haskell's Maybe/Either types.

It has nothing to do with threads. JavaScript in the browser always runs in one
thread, not on multiple threads. But you still cannot use exceptions, because you
don't know when your function will be called in the future. Asynchronous doesn't
automatically mean an involvement of threads, and that's also the reason why you
cannot work with exceptions: you can only catch an exception if the function you
call is executed immediately. But the whole purpose of asynchronous code is that it
runs in the future, often when something else has finished, and not immediately.
That's why you cannot use exceptions there.
---

The maybe monad is basically the same as most mainstream language's use of "null
means error" checking (except it requires the null to be checked), and has largely
the same advantages and disadvantages.
Well, it does not have the same disadvantages, since it can be statically type
checked when used correctly. There is no equivalent of a null pointer exception
when using maybe monads (again, assuming they are used correctly).

Exception handling can be a real pain for factoring and testing. I know python
provides nice "with" syntax that allows you to trap exceptions without the rigid
"try ... catch" block. But in Java, for example, try catch blocks are big,
boilerplate, either verbose or extremely verbose, and hard to break up. On top of
that, Java adds all the noise around checked vs. unchecked exceptions.
If, instead, your monad catches exceptions and treats them as a property of the
monadic space (instead of some processing anomaly), then you're free to mix and
match functions you bind into that space regardless of what they throw or catch.
If, better yet, your monad prevents conditions where exceptions could happen (like
pushing a null check into Maybe), then even better. if...then is much, much easier
to factor and test than try...catch.
From what I've seen Go is taking a similar approach by specifying that each
function returns (answer, error). That's sort of the same as "lifting" the function
into a monad space where the core answer type is decorated with an error
indication, and effectively side-stepping throwing & catching exceptions.

Scala -

Option[T]: use it when a value can be absent or some validation can fail and you
don’t care about the exact cause. Typically in data retrieval and validation logic.
Either[L,R]: similar use case as Option, but when you do need to provide some
information about the error.
Try[T]: use when something Exceptional can happen that you cannot handle in the
function. This, in general, excludes validation logic and data retrieval failures
but can be used to report unexpected failures.
Exceptions: use only as a last resort. When catching exceptions use the facility
methods Scala provides and never catch { _ => }; instead use catch { NonFatal(_) => }.

Eithers are superior error handling in every way. However, without helping syntax
(like do-notation in Haskell) they become unwieldy and hard to write and debug.
Don't do

foo().leftMap([](x){whatever}).rightMap(...)

etc. I did that once; people still remind me of that fact and express their strong
dissatisfaction with it. And rightly so. Instead do

Either<Err, Val> maybeResult = some_computation();
if (!maybeResult) { /* handle the error situation */ return; }
Val& result = maybeResult.value();
/* proceed on happy path */

Either class in Java -


public class Either<A,B> {
    private A left = null;
    private B right = null;

    private Either(A a, B b) {
        left = a;
        right = b;
    }

    public static <A,B> Either<A,B> left(A a) {
        return new Either<A,B>(a, null);
    }

    public A left() {
        return left;
    }

    public boolean isLeft() {
        return left != null;
    }

    public boolean isRight() {
        return right != null;
    }

    public B right() {
        return right;
    }

    public static <A,B> Either<A,B> right(B b) {
        return new Either<A,B>(null, b);
    }

    public void fold(F<A> leftOption, F<B> rightOption) {
        if (right == null)
            leftOption.f(left);
        else
            rightOption.f(right);
    }
}

You can provide default values using Either

public static Either<Exception, Integer> parseNumberDefaults(final String s) {
    if (!s.matches("[IVXLXCDM]+"))
        return Either.left(new Exception("Invalid Roman numeral"));
    else {
        int number = new RomanNumeral(s).toInt();
        return Either.right(new RomanNumeral(number >= MAX ? MAX : number).toInt());
    }
}

public static Option<Double> divide(double x, double y) {
    if (y == 0)
        return Option.none();
    return Option.some(x / y);
}

---------------------------------------------
std::optional<int> str2int(string); // converts string to int if possible

int get_int_from_user()
{
    string s;

    for (;;) {
        cin >> s;
        std::optional<int> o = str2int(s); // 'o' may or may not contain an int
        if (o) {           // does optional contain a value?
            return *o;     // use the value
        }
    }
}

-----------------------------------------------------------------------------------

printf("%.*s", length, string) to print a part of a string,
or to print a non-nul-terminated string.
Or just use `fwrite(string, length, 1, stdout);`

Printing floats at variable precision:
printf("%.*f", precision, float)

struct functions

type Point struct {
    int x;
    int y;
}

func void Point_add(Point* p, int x) {
    p.x = x;
}

func void example() {
    Point p = { 1, 2 };

    Point_add(&p, 10);
}

--------------------------------------------------------
String Tokenizer -
https://stackoverflow.com/questions/53849/how-do-i-tokenize-a-string-in-c
http://www.cplusplus.com/faq/sequences/strings/split/#string-find_first_of
https://stackoverflow.com/questions/236129/how-do-i-iterate-over-the-words-of-a-string

-------------------------------------------------
Alpha-beta pruning in one sentence:
If you want to compute max(8, min(5, ...), ...), you don't need to compute the rest
of the arguments to min, because they won't affect the value of the max.
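The sentence above, as a minimal sketch over a hand-built game tree (Node and the leaf values are illustrative):

```cpp
#include <algorithm>
#include <vector>
#include <climits>

struct Node {
    int value;                  // used only at leaves
    std::vector<Node> children; // empty for leaves
};

// Classic alpha-beta: prunes a subtree as soon as it cannot affect the result.
int alphabeta(const Node& n, int alpha, int beta, bool maximizing) {
    if (n.children.empty()) return n.value;
    if (maximizing) {
        int best = INT_MIN;
        for (const Node& c : n.children) {
            best = std::max(best, alphabeta(c, alpha, beta, false));
            alpha = std::max(alpha, best);
            if (alpha >= beta) break; // beta cutoff: min player won't allow this
        }
        return best;
    } else {
        int best = INT_MAX;
        for (const Node& c : n.children) {
            best = std::min(best, alphabeta(c, alpha, beta, true));
            beta = std::min(beta, best);
            if (alpha >= beta) break; // alpha cutoff
        }
        return best;
    }
}
```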

-------------------------------------------------
Whether hPrevInstance was NULL or not told you whether you were the first copy of
the program. Under 16-bit Windows, only the first instance of a program registered
its classes; second and subsequent instances continued to use the classes that were
registered by the first instance. (Indeed, if they tried, the registration would
fail since the class already existed.) Therefore, all 16-bit Windows programs
skipped over class registration if hPrevInstance was non-NULL.
The people who designed Win32 found themselves in a bit of a fix when it came time
to port WinMain: What to pass for hPrevInstance? The whole module/instance thing
didn’t exist in Win32, after all, and separate address spaces meant that programs
that skipped over reinitialization in the second instance would no longer work. So
Win32 always passes NULL, making all programs believe that they are the first one.

/* previous instances do not exist in Win32 */
if (hPrevInstance)
    return 0;

--------------------------------------------
Good approximation of y=x/(x+1) without division, from 0 to 1

float fastXDivXP1(float x) {
    float x0 = x;
    float x1 = x + 1.0f;

    u32 i  = *(u32 *)&x;
    u32 i1 = *(u32 *)&x1;
    u32 di = 0x3f7618e0 - (i1 - i);

    x = *(float *)&di;
    x1 = x - 1.0f;
    x = x + (x + x0*x1) * x1;
    return x;
}

u32 = uint32_t

So if you don't need the last Newton-Raphson iteration, it has 0 multiplications.
Works because log(x/(x+1)) = log(x) - log(x+1).
--------------------------------------------
// Array Bound Check

T &operator[](size_t index) {
    if (index >= size)
        throw OUT_OF_RANGE; // #define OUT_OF_RANGE 0x0A
    return array[index];
}

If you limit the size of the array to a power of two, you will be able to use
masking. That is, store an additional value size_t mask; equal to size-1 (==
2^n-1). Then the check may be done as:

T &operator[](size_t index) {
    return array[index & mask];
}

When you use array indexing, you are really using a pointer in disguise that is
automatically dereferenced; array[1] is equivalent to *(array + 1).

When you have an array and a pointer to it, like this:

int array[5];
int *ptr = array;

then "array" in the second declaration is really decaying to a pointer to the
first element of the array. This is equivalent behavior to this:

int *ptr = &array[0];

namespace ninepoints { template <class T> using buffer = std::vector<T>; }

or a name like stretchy_buffer as an alias for std::vector

-------------------------------------------------------------------
static_cast<> is preferred to a C-style cast such as int n = (int)f; because
static_cast is resolved during compilation, so the dev will catch the error (if
any) during compilation, whereas dynamic_cast is a runtime conversion, so the
developer can catch the error only if it happens during runtime.
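A sketch of that difference (the Shape hierarchy is illustrative):

```cpp
struct Shape { virtual ~Shape() {} };
struct Circle : Shape {};
struct Square : Shape {};

// dynamic_cast is checked at runtime: it reports failure by returning
// nullptr, so the error can only be observed while the program runs.
bool is_circle(Shape* s) {
    return dynamic_cast<Circle*>(s) != nullptr;
}

// static_cast is resolved entirely at compile time; an invalid
// conversion here would be rejected by the compiler, not at runtime.
int truncate(float f) {
    return static_cast<int>(f);
}
```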
