This [lib/tm.conf] file needs to be readable by the web server but should not be served by the web server. Configure Apache to ensure this.

How do I do this exactly? I tried something in httpd.conf, and I believe I even restarted httpd; no luck.
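One way to do this in Apache 2.4 is to deny all web access to the directory while leaving the file readable by the httpd process on disk. This is a hedged sketch: the path below is an assumption and needs to match your actual DocumentRoot layout, and it requires an httpd restart or reload to take effect.

```apache
# Sketch: block web access to everything under lib/ (including tm.conf)
# while the httpd process can still read it from disk.
# Adjust the path to your actual DocumentRoot.
<Directory "/var/www/html/lib">
    Require all denied
</Directory>
```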
I'm still too new to SQL & PHP, and how to configure it for TM, to know what's going on here. :)

```php
mysqli_report(MYSQLI_REPORT_STRICT);
try {
    $tmdb = new mysqli($tmdbhost, $tmdbuser, $tmdbpasswd, $tmdbname);
} catch ( Exception $e ) {
    echo "Failed to connect to database ".$tmdbname." on ".$tmdbhost." Please try again later.";
    exit;
}

$result = $tmdb->query("SELECT * FROM graphs ORDER BY descr, format");
$counter = 0;
$prevRow;
while ($counter < $result->num_rows) {
    // get three entries: collapsed, simple, then traveled for each
    $crow = $result->fetch_assoc();
    $srow = $result->fetch_assoc();
    $trow = $result->fetch_assoc();
    $counter += 3;
    if ($crow == NULL || $srow == NULL || $trow == NULL) {
        // should produce some kind of error message
        continue;
    }
    // build table row (was: class=collapsed)
    echo "\n";
    echo "\n";
    echo "\n";
}

$values = array();
$descr = array();
$result = $tmdb->query("SELECT * FROM graphTypes");
while ($row = $result->fetch_array()) {
    array_push($values, $row[0]);
    array_push($descr, $row[1]);
}
?>
```
| File | Size in Web repo | Size on noreaster |
| --- | --- | --- |
| jquery.tablesorter.pager.js | (not present) | 4077 |
| tmjsfuncs.js | 36057 | 36061 |
| tmphpfuncs.php | 15390 | 15408 |
```
TravelMapping
travmap
[password]
localhost
```

```
vsTravelMappingCopy
travmap
[password]
localhost
```

?
Ingesting the .sql file took a bit over an hour on my machine. I don't understand why; my CPU use meters were all barely above 0% during the process. If disk access were the bottleneck here, it should have taken only ~20 s to read the file... Jim, is this also your experience?

https://stackoverflow.com/questions/29643714/improve-speed-of-mysql-import
It rather dwarfs the time spent running the siteupdate program itself. I wonder whether playing around with these (https://github.com/TravelMapping/DataProcessing/blob/347edf5ae31cdbd1d6aacd6178d8e3b02cd2b132/siteupdate/python-teresco/siteupdate.py#L3669-L3672) values (https://github.com/TravelMapping/DataProcessing/blob/347edf5ae31cdbd1d6aacd6178d8e3b02cd2b132/siteupdate/python-teresco/siteupdate.py#L3751-L3754) here (https://github.com/TravelMapping/DataProcessing/blob/347edf5ae31cdbd1d6aacd6178d8e3b02cd2b132/siteupdate/python-teresco/siteupdate.py#L3764-L3767) would have an effect on performance...
Thanks for all of these. I think it's very good to ensure we can install on another server, and to keep the documentation and code updated to facilitate this. I'm making some fixes and will add some things to the README.md.

I'd also mention:
The DB takes under 5 minutes to ingest on noreaster. I know it's a pretty high-end server with lots of memory, but I'm amazed to see such a difference on yours.

I don't think memory is coming into play in my case. Memory usage tops out at about 1.3 GB (out of 8) while importing, so I'm not paging. Increasing the value of innodb_buffer_pool_size had no effect.
Looks like I should also try those DB options, and might be able to get site update time down significantly.

In the meantime, innodb_flush_log_at_trx_commit=0 helped a lot, of course. Playing with innodb_flush_log_at_timeout also looks promising; I'll give that a go shortly. And then there are the other recommendations at the links above that I've not even looked into yet.
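For reference, the durability-vs-speed settings discussed in this thread can be collected in a my.cnf fragment like the sketch below. The values shown are illustrative assumptions, not the ones actually benchmarked here.

```ini
# Sketch of an InnoDB tuning fragment for bulk .sql ingestion.
# These trade crash durability for write throughput, which is acceptable
# when the DB is rebuilt from scratch on every site update.
# Values are illustrative, not measured recommendations.
[mysqld]
innodb_flush_log_at_trx_commit = 0   # don't flush the redo log on every commit
innodb_flush_log_at_timeout    = 10  # flush the log roughly every 10 seconds
innodb_buffer_pool_size        = 1G  # had little effect on this workload
```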
3.5 inch 2TB SATA 7.2k RPM HDD, FPWS:

```
da0 at mpr0 bus 0 scbus0 target 3 lun 0
da0: <ATA ST2000DM001-1ER1 CC27> Fixed Direct Access SPC-4 SCSI device
da0: Serial Number Z4Z8SHCY
da0: 600.000MB/s transfers
da0: Command Queueing enabled
da0: 1907729MB (3907029168 512 byte sectors)
da0: quirks=0x8<4K>
```
More README.md enhancements just in, thanks for the suggestions.
Note that the `fonts` directory is not updated by this script, and those files will need to be transferred separately.

On the one hand, maybe put it into updateserver.sh, if it's recommended to initially populate the root directory?
The JS files for these should be placed into a location where they can be read by the code generated by `tm_common_js` in `lib/tmphpfuncs.php`.

Would it be more clear to specify just putting the leaflet directory in the root directory?
For DB optimization, I'm hesitant to change settings just yet, since I don't want to pay any price on reads to speed up writes. But I think it's very useful to investigate and try them at some point.

The one setting I've changed so far, at least, I don't believe is something that affects reads. IIUC, it can affect how hosed your data is in the case of a crash. In my case, with every update I'll be repopulating the DB essentially from scratch, and I don't care about that. :)
Definitely worth checking on optimizing the number of entries per INSERT,

Going to try this out next.
and given that noreaster has all those cores,

ISTR you once referring to 20 HyperThreaded cores. I searched the forum, GitHub, and emails, but can't find the post.
breaking into multiple files that can be ingested in parallel could be a big win.

I haven't really read up on the mysqlimport link I posted. In any case, there are multiple ways to skin this cat.
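As a sketch of the entries-per-INSERT idea: rows can be grouped into multi-row INSERT statements of a fixed batch size, so the server commits far fewer statements. This is a hypothetical illustration in the spirit of the per-table loops in siteupdate.py; the table and column names are made up, not the actual schema.

```python
def batched_inserts(rows, table, columns, batch_size=1000):
    """Yield multi-row INSERT statements, at most batch_size rows each.

    rows    -- list of tuples of values (assumed numeric here for brevity;
               real code would need proper quoting/escaping)
    table   -- table name to insert into
    columns -- list of column names
    """
    for start in range(0, len(rows), batch_size):
        chunk = rows[start:start + batch_size]
        # render each row as "(v1,v2,...)" and join them with commas
        values = ",".join(
            "(" + ",".join(str(v) for v in row) + ")" for row in chunk
        )
        yield "INSERT INTO {} ({}) VALUES {};".format(
            table, ",".join(columns), values
        )

# Example: 3 rows with a batch size of 2 produce 2 statements
stmts = list(batched_inserts([(1, 2), (3, 4), (5, 6)], "t", ["a", "b"], 2))
```

With a batch size around 1000, the dump shrinks from one statement per row to one per thousand rows, which is where most of the import speedup from this technique comes from.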
The leaflet location is in part to help HDX find it easily, but I'm open to reorganization for ease of configuration/update.

I think its current location in DocumentRoot makes sense.
Updates on the updates page are still sorted by date, but in a different order within a date: the two CZE II603 entries from 2019-06-24 are swapped, as are the two WV WV817 entries from 2019-04-21. This appears deterministic on each site; reloading does not change it. On each server, one pair of entries is kept in CSV order, and the other swapped. Not a deal-breaker by any means, just something odd I noticed.

More of a deal-breaker: Right now, as of
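One possible explanation for the swapped pairs, offered as an assumption rather than a diagnosis: if the updates query orders only by date, SQL leaves the relative order of rows sharing a date unspecified, so different servers can legitimately return them differently. Adding secondary sort keys makes the order deterministic everywhere. The table and column names below are hypothetical, not the actual schema.

```sql
-- Hypothetical sketch: tiebreaker columns pin down the order of rows
-- that share a date, so every server returns them identically.
SELECT date, region, route, description
  FROM updates
 ORDER BY date DESC, region, route;
```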
I've got a mirror running at http://205.209.84.174, so we won't have to go without our fix for a day. :)
```php
// get the timestamp of most recent DB update
function tm_update_time() {
    global $tmdb;
    global $tmdbname;
    global $tmsqldebug;

    $sql_command = "SELECT create_time FROM information_schema.tables WHERE TABLE_SCHEMA = '".$tmdbname."' ORDER BY create_time DESC;";
    if ($tmsqldebug) {
        echo "<!-- SQL: ".$sql_command." -->\n";
    }
    $res = tmdb_query($sql_command);
    $row = $res->fetch_assoc();  // only the first (newest) row is used
    $ans = $row['create_time'];
    $res->free();
    return $ans;
}
```
```
mysql> SELECT create_time FROM information_schema.tables WHERE TABLE_SCHEMA = 'TravelMapping' ORDER BY create_time DESC;
+---------------------+
| CREATE_TIME         |
+---------------------+
| 2019-07-09 10:52:07 |
| 2019-07-09 10:52:07 |
| 2019-07-09 10:52:06 |
| 2019-07-09 10:52:06 |
| 2019-07-09 10:52:05 |
| 2019-07-09 10:51:57 |
| 2019-07-09 10:51:51 |
| 2019-07-09 10:51:49 |
| 2019-07-09 10:51:48 |
| 2019-07-09 10:51:48 |
| 2019-07-09 10:51:48 |
| 2019-07-09 10:50:42 |
| 2019-07-09 10:50:16 |
| 2019-07-09 10:50:14 |
| 2019-07-09 10:49:51 |
| 2019-07-09 10:49:49 |
| 2019-07-09 10:49:46 |
| 2019-07-09 10:49:46 |
| 2019-07-09 10:49:46 |
| 2019-07-09 10:49:46 |
| 2019-07-09 10:49:46 |
+---------------------+
21 rows in set (0.00 sec)
```
I'm not sure what's happening.
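For what it's worth, since only the newest timestamp is ever used, a possible simplification (an assumption, not something tested against this schema) is to let the server compute it directly, so only one row comes back:

```sql
-- Possible one-row alternative to fetching the full sorted list:
SELECT MAX(create_time)
  FROM information_schema.tables
 WHERE TABLE_SCHEMA = 'TravelMapping';
```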