| Commit message | Author | Age |
| |
This patch cleans up the source code to satisfy the coding guidelines (see
http://wiki.splitbrain.org/wiki:development#coding_style).
It converts files to UNIX line endings and removes tabs and trailing
whitespace. Not all files have been cleaned yet.
darcs-hash:20060217222040-7ad00-bba3d2bee3b5aa7cbb5184258abd50805cd071bf.gz
|
| |
Searching for word parts is now possible by appending or prepending a *
character to the search word:
'foo*' matches words beginning with 'foo', e.g. 'foobar'
'*foo' matches words ending in 'foo', e.g. 'barfoo'
'*foo*' matches anything containing 'foo', e.g. 'barfoobaz'
darcs-hash:20051127180723-7ad00-1eb29e812ddaf38d9812697bb1cffffe9a5fb330.gz
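A minimal sketch of how such wildcard terms could be mapped to matching
against indexed words; the function and variable names below are hypothetical
and not DokuWiki's actual code:

```php
<?php
// Turn a search term with optional leading/trailing '*' into a PCRE pattern
// and filter a list of indexed words with it (illustrative only).
function wildcard_to_pattern($term) {
    $prefix = '';
    $suffix = '';
    if (substr($term, 0, 1) === '*') {    // '*foo' -> match word endings
        $term = substr($term, 1);
    } else {
        $prefix = '^';
    }
    if (substr($term, -1) === '*') {      // 'foo*' -> match word beginnings
        $term = substr($term, 0, -1);
    } else {
        $suffix = '$';
    }
    return '/' . $prefix . preg_quote($term, '/') . $suffix . '/u';
}

$words   = array('foobar', 'barfoo', 'barfoobaz', 'other');
$matches = preg_grep(wildcard_to_pattern('*foo*'), $words);
print_r($matches);   // foobar, barfoo, barfoobaz
```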
|
| |
This new option accepts a regular expression to filter certain pages from all
automatic listings (RSS, recent changes, search results, index). This is
useful to exclude pages such as the ones used in the sidebar templates. The
regexp is matched against the full page ID with a leading colon. If it
matches, the page is treated as hidden.
IMPORTANT: this is not related to ACLs. A hidden page is still visible to all
users (if not restricted by ACLs) when linked or called directly.
darcs-hash:20051103101726-6e07b-8d45912a1b4f6cfc9e3fce147c15f84a58ea7ca2.gz
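A minimal sketch of how such a hidden-page check could look; the option value,
function name, and example regexp are assumptions for illustration, not
DokuWiki's actual API:

```php
<?php
// Illustrative only: decide whether a page ID should be hidden from
// automatic listings based on a configured regular expression.
function is_hidden_page($id, $hide_regex) {
    if ($hide_regex === '') return false;  // option unset: nothing is hidden
    // Match against the full page ID with a leading colon.
    return (bool) preg_match('/' . $hide_regex . '/ui', ':' . $id);
}

// Example: hide pages named 'sidebar' or inside a 'sidebar' namespace.
$hide_regex = ':sidebar$|:sidebar:';
var_dump(is_hidden_page('wiki:sidebar', $hide_regex));  // true
var_dump(is_hidden_page('wiki:syntax',  $hide_regex));  // false
```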
|
| |
The new handling of Asian characters as single words needs a recent PCRE
library (PHP 4.3.10 is known to work). If this support isn't available, the
regexp compilation will fail. This patch adds a workaround; as a result, the
search will not work as expected for Asian words on older PHP versions.
darcs-hash:20051009124833-7ad00-1319829be5cb73246e13eb65e4c950d43c6ce5bf.gz
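A minimal sketch of the kind of feature test such a workaround can use; the
constant name and character range below are assumptions for illustration:

```php
<?php
// A PCRE build without proper Unicode support fails to compile the pattern;
// preg_match() then returns false (the @ suppresses the warning).
define('EXAMPLE_ASIAN_RANGE', '\x{3040}-\x{30ff}\x{4e00}-\x{9fcf}');

$asian_support = (@preg_match('/[' . EXAMPLE_ASIAN_RANGE . ']/u', '') !== false);

if ($asian_support) {
    // use the full pattern that treats Asian characters as single words
} else {
    // fall back to the plain word-based pattern; Asian searches degrade
}
```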
|
| |
Asian languages do not use spaces to separate words. The indexer, however,
does a word-based lookup. Splitting, for example, Japanese text into real
words is only possible with complicated natural language processing, which is
completely out of scope for DokuWiki.
This patch solves the problem by treating each Asian character as a single
word. When an Asian word (consisting of multiple characters) is searched for,
it is treated as a phrase search: each character is looked up by itself first,
then the phrase is checked for in the found documents.
darcs-hash:20050925175451-7ad00-933b33b51b5f2fa05e736c18b8db58a5fdbf41ce.gz
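A minimal sketch of the per-character split and phrase check described above;
the character range is a simplified subset (Hiragana, Katakana, CJK Unified
Ideographs) and the names are made up, not the indexer's actual code:

```php
<?php
// Split a string into single Asian characters, each treated as its own word.
function split_asian_chars($text) {
    preg_match_all('/[\x{3040}-\x{30ff}\x{4e00}-\x{9fcf}]/u', $text, $m);
    return $m[0];
}

$query = '日本語';
$chars = split_asian_chars($query);   // ['日', '本', '語']

// 1) documents containing every single character (from the index lookup)
$candidates = array('彼は日本語を話す', '中文のテキスト');
// 2) keep only documents containing the characters as a contiguous phrase
$results = array();
foreach ($candidates as $doc) {
    if (strpos($doc, $query) !== false) $results[] = $doc;
}
print_r($results);   // only the first document remains
```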
|
| |
darcs-hash:20050925102211-7ad00-200edd676ba3956f03ec5bcc5149d4aa4bd15e24.gz
|
| |
darcs-hash:20050921195118-7ad00-9070166cbaa26e3f27f7b92382346a70f5c479a1.gz
|
| |
darcs-hash:20050912143027-7ad00-b2f3165d8db7122a453ecc63ad031af4467f691f.gz
|
| |
darcs-hash:20050912141042-7ad00-5ef43525c9fd7ba44206720c54bb566450f93250.gz
|
| |
darcs-hash:20050903220229-7ad00-5d95f905eaeb3f6b867aa3ee43c2a8bccc533c00.gz
|
|
The new search function was added but is not yet integrated into
DokuWiki's interface.
darcs-hash:20050828152821-7ad00-a6e79a9dc5aaf41c547cf42dccdbc3b5bc8d303e.gz
|