The site can be duplicated by following the instructions below. I think/hope this is complete.


You will need to have the following installed:

  • apache2
  • mariadb
  • php7
  • php7.0-mbstring
  • php7.0-mysql
  • libapache2-mod-php
  • npm
  • wkhtmltopdf
  • curl
  • git
  • composer
  • xvfb (if running on headless server)
  • phantomjs

The Base Code

Clone from GitHub

git clone

cd to the 'iching' dir and run

composer install

From here on I refer to this 'iching' directory as both 'DOC_ROOT' and '~/'.

Tweak Apache2

Enable the mods expires and rewrite:

a2enmod expires

a2enmod rewrite

Create a vhost config. This is mine:

<VirtualHost *:80>

    <Directory "/home/jw/src/iching">
        Require all granted
        Options +Indexes
        AllowOverride all

        # needed for reading the iching.ini PHP config file
        # it can be either "dev" or "prod"
        SetEnv runtime dev
    </Directory>

    DocumentRoot "/home/jw/src/iching"
    ErrorLog "/var/log/httpd/babelbrowser_error_log"
    CustomLog "/var/log/httpd/babelbrowser_access_log" common

    ExpiresActive On
    ExpiresDefault "access plus 1 seconds"

    #RewriteEngine on
    #RewriteRule .* - [F]
</VirtualHost>

# Make sure all cache is really off
<FilesMatch "\.(php|html|js|css|tpl)$">
    FileETag None
    <IfModule mod_headers.c>
        Header unset ETag
        Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate"
        Header set Pragma "no-cache"
        Header set Expires "Wed, 11 Jan 1984 05:00:00 GMT"
    </IfModule>
</FilesMatch>

To protect the editor you will need to set up a password file. These directives go in the relevant <Directory> block:

    AuthType Basic
    AuthName "Editor Access"
    AuthUserFile /home/jw/sites/iching/.htpasswd
    Require valid-user

    RewriteEngine off
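The AuthUserFile has to exist before the protected pages will load. A minimal sketch for creating it, assuming openssl is available (the htpasswd tool from apache2-utils does the same job; 'changeme' is a placeholder password):

```shell
# create an htpasswd-style entry for user 'jw';
# on the real server write to /home/jw/sites/iching/.htpasswd instead
printf 'jw:%s\n' "$(openssl passwd -apr1 'changeme')" > .htpasswd
cat .htpasswd   # prints something like jw:$apr1$SALT$HASH
```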

Edit the /etc/hosts file to add a local server name for testing.

Create the database

mysql -e "create database iching"

Clear the FULLTEXT stop word list BEFORE importing the data.

NOTE: I have only tried this on InnoDB tables, and even then with flaky results after much hassle.

NOTE: Only MySQL >= 5.6 supports InnoDB FULLTEXT searches. If you are running < 5.6 you will need to either upgrade or convert the tables to MyISAM.

Because the MySQL/MariaDB fulltext search's use of stop words is incompatible with this site's search, you'll need to disable them by adding the appropriate line to /etc/mysql/my.cnf (or wherever yours is), inside the [mysqld] section. There is one option for InnoDB and a separate one for MyISAM.
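The actual option lines are not shown above; assuming the standard MySQL/MariaDB options for disabling fulltext stop words, the my.cnf additions would look something like this (option names and availability vary by version, so check your server's docs):

```ini
[mysqld]
# InnoDB fulltext (MySQL >= 5.6 / MariaDB >= 10.0)
innodb_ft_enable_stopword = OFF

# MyISAM fulltext (rebuild the fulltext indexes after changing this)
ft_stopword_file = ''
```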



Import the data

mysql iching < database_noopt.sql

Run the following commands in the MySQL client (edit as needed):

create user 'ichingDBuser'@'localhost' identified by 'aJU6sk1w3e';

grant all on hexagrams.* to 'ichingDBuser'@'localhost';

grant all on iching.* to 'ichingDBuser'@'%';


Code Igniter (for CRUD)

Install Code Igniter under the DOC_ROOT (~/). Download it here ->

I renamed the dir to '~/cignite'

cd ~/cignite

composer install

Now you need to install grocery_crud: download it to any tmp dir, unzip it, 'cd' into the dir it made (in my case 'grocery-crud-1.5.9'), and create a zip file:

zip -r *

Move that zip to your Code Igniter dir (~/cignite), and unzip it there.

cd ${WEBROOT}/cignite


Then edit the application/config/database.php file
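Assuming the stock CodeIgniter 3 layout, the relevant part of application/config/database.php would look roughly like this (values taken from the database user created earlier; adjust to your own):

```php
$db['default'] = array(
    'dsn'      => '',
    'hostname' => 'localhost',
    'username' => 'ichingDBuser',
    'password' => 'aJU6sk1w3e',
    'database' => 'iching',
    'dbdriver' => 'mysqli',
    // ... leave the remaining defaults as-is
);
```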


Create a file called iching.ini in the PHP modules directory (or wherever you think is best), usually under the PHP config folder (in my case that was '/etc/php/7.0/mods-available/').

If you have various versions of php.ini under different directories, such as /etc/php5/cli, /etc/php5/apache, etc., create symlinks to each respective dir... much easier to manage.

ln -fs /etc/php/7.0/mods-available/iching.ini /etc/php/7.0/apache/conf.d/20-iching.ini

ln -fs /etc/php/7.0/mods-available/iching.ini /etc/php/7.0/cgi/conf.d/20-iching.ini

[iching]
= "/home/jw/sites/iching"
= ""
= "jw"
= "iching"
= "localhost"
= "ichingDBuser"
= "1q28sjnk75GHw3e"
= "/home/jw/sites/babelbrowser"
= ""
= "jw"
= "babelbrowser"
= "localhost"
= "ichingDBuser"
= "1q2sldjcnd*&w3e"


NOTE: The database import only imports the DEV database "iching", so only the *.dev.* vars are used, by default.

The server name is only used when testing from the command line. You may not need it but it must be there.

The iching.*.user is the user that the HTML to PDF converter will run as. This should be your normal user id.

But you must add the following line to the /etc/sudoers file to give the web server user permission to run it.

www-data ALL=(jw) NOPASSWD: /home/jw/src/iching/utils/

where www-data is the user the web server runs under.

jw is the username to run the script as, followed by the entire path to the makePdf.sh script.
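Putting those two sentences together, the sudoers entry presumably ends in the full script path, something like (the exact path is my guess from the description above):

```
www-data ALL=(jw) NOPASSWD: /home/jw/src/iching/utils/makePdf.sh
```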

To test that it is being read

php --ini|grep iching

you should get back a line that looks something like


To test the site from the command line, run

php-cgi -f ./index.php flipped=1 f_tossed=23 f_final=11

This way it is easy to see the errors.

If that works, you are ready to go to the website and test it out there. You might need to fix all the permissions as well. This script can be run from ~/:


shopt -u dotglob

sudo chown -R jw *

sudo chown -R jw .git

sudo chown -R www-data questions

sudo chmod -R a+rw questions

sudo chown -R www-data id

sudo chmod -R a+rw id

sudo chown -R www-data astro

sudo chmod -R a+rw astro

sudo chmod -R 777 log
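A side note on the shopt -u dotglob line above: with dotglob unset, * does not match dotfiles, which is why .git gets its own chown. A quick illustration:

```shell
cd "$(mktemp -d)"
touch visible .hidden
shopt -u dotglob
echo *            # prints: visible
shopt -s dotglob
echo *            # prints: .hidden visible
```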


The code to build the PDFs

cd ~/lib/md2pdf

cp composer.json.UPDATED composer.json

composer install


'cd' to ~/book

git clone

The script ~/book/MAKE is how the book is compiled.


PDF generation: to run on a headless server you have to create a wrapper for wkhtmltopdf, as follows

echo -e '#!/bin/bash\nxvfb-run -a --server-args="-screen 0 1024x768x24" /usr/bin/wkhtmltopdf -q "$@"' > /usr/bin/

chmod a+x /usr/bin/

ln -s /usr/bin/ /usr/local/bin/wkhtmltopdf
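The wrapper's filename is cut off in the commands above; written out with a hypothetical name (wkhtmltopdf-headless is my invention, use whatever the original name was), the same wrapper looks like:

```shell
# hypothetical name; install to /usr/bin on the real server
cat > ./wkhtmltopdf-headless <<'EOF'
#!/bin/bash
# run wkhtmltopdf under a virtual X server so it works without a display
exec xvfb-run -a --server-args="-screen 0 1024x768x24" /usr/bin/wkhtmltopdf -q "$@"
EOF
chmod a+x ./wkhtmltopdf-headless
```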

You will also need to add the following line to your /etc/sudoers file to give the web server user permission to run it:

http ALL=(jw) NOPASSWD: /home/jw/src/iching/utils/

where http is the user the web server runs under (it could be www-data as well), and jw is the username it will run as (use your own).

Planetary calculations

Put this line in /etc/sudoers as well:

http ALL=(jw) NOPASSWD: /home/jw/src/iching/astro/

This enables the astrological functions.

You will also have to install phantomjs (>2.1.1-8), which is a 'Headless WebKit with JavaScript API', because all the astro calculations are done in JavaScript, which in this case requires a display device. phantomjs is easily installed with

sudo apt-get install phantomjs (Ubuntu,Debian, etc)


sudo pacman -S phantomjs (Arch)

But if you can't, you can download it here ->

It is expected to be in the /usr/bin dir

HotBits Server Testing

Note: To get access to the radioactive-decay random number generator you need a 'HotBits' API key from Fourmilab.

Put something like this

2,17,32,47 * * * * /home/jw/sites/iching/utils/ >>/tmp/cron.log 2>&1

into your crontab to ensure that the status of the HotBits server is kept up to date. When the server is down, the option is removed from the web form.

Replacing database editor

Because the default CKEditor in Grocery_CRUD inserts HTML tags into the content automatically, it needs to be replaced with an editor that does not do that. I use MarkItUp.

From a temp folder check out MarkItUp:

git clone

This creates the subdir 1.x (a terrible name for a subdir, but I'll keep it).

cd 1.x

bower install

mv 1.x/markitup ~/cignite/assets/grocery_crud/texteditor

mv 1.x/images ~/cignite/assets/grocery_crud/ (not sure if this is necessary)

edit ~/cignite/application/config/grocery_crud.php and change

$config['grocery_crud_default_text_editor'] = 'ckeditor';

to

$config['grocery_crud_default_text_editor'] = 'markitup';

The new editor is now installed.


Because show.php reads the formatted HTML pages and extracts the DOM objects, in order to get the images to work and not give a 404 you need to symlink the image files into the assets folder:

ln -fs ${DOC_ROOT}/images/hex/small ${DOC_ROOT}/assets


There are a couple of fairly crude node.js scripts in ~/test/crawler/ (called testcrawler.js and testcrawler200.js) that look for a "200" response from the pages, search for a word on the page, and check for the existence of a PDF file. You need to edit the shell script and change the SITE variable.

This requires node.js and npm.

pacman -S nodejs


apt-get install nodejs

In ~/test/crawler install all the packages listed in package.json:

npm install

Best not to install globally as it confuses the package manager.

You can also easily use something like wget to crawl the site and test for bad links.
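A minimal sketch of the wget approach (shown against a throwaway local server so it is self-contained; point it at your own SITE instead):

```shell
# serve a scratch dir containing one deliberately broken link, then spider it
dir=$(mktemp -d)
echo '<a href="missing.html">broken</a>' > "$dir/index.html"
python3 -m http.server 8731 --directory "$dir" >/dev/null 2>&1 &
srv=$!
sleep 1
# --spider: don't save pages; -r: recurse; the log records any 404s
wget --spider -r -nd -o "$dir/wget.log" http://127.0.0.1:8731/ || true
grep '404' "$dir/wget.log"
kill "$srv"
```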


To create a sitemap.xml I used the following Perl script:

${DOCROOT}/test/crawler/sitemap/ --config=${DOCROOT}/test/crawler/sitemap/config.xml

The code and a MAKE script are in ${DOCROOT}/test/crawler/sitemap/
