Fix getlibdirs when libdirs is unset
Simplify libdirs: libdirs must be plural, libdir must be a single string

This makes the documentation read more sensibly, and matches how we handle underlaydirs and underlaydir.
Merge remote-tracking branch 'spalax/paternal/libdirs'
Make getlibdirs return an array (or whatever this type is called in Perl)
Allow several extra library and plugin directories (libdir option)
entab
in debug mode, issue a warning before waiting for a lock
page.tmpl: tell mobile browsers we have a responsive layout, unless told not to

Mobile browsers typically assume that arbitrary web pages are designed for a "desktop-sized" browser window (around 1000px) and display that layout, zoomed out, in order to avoid breaking naive designs that assume nobody will ever look at a website on a phone or something. People who are actually doing "responsive design" need to opt in to mobile browsers rendering it at a more normal size.
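The opt-in described above is conventionally a viewport meta tag emitted by the page template; the exact markup is not shown in this log, but it would typically look like this sketch:

```html
<!-- Opt in to mobile rendering at native device width; omitting this
     tag leaves the old zoomed-out "desktop" layout behaviour. -->
<meta name="viewport" content="width=device-width, initial-scale=1">
```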
Always produce HTML5 doctype and new attributes, but not new elements

According to caniuse.com, a significant fraction of Web users are still using Internet Explorer versions that do not support HTML5 sectioning elements. However, claiming we're XHTML 1.0 Strict means we can't use features invented in the last 12 years, even if they degrade gracefully in older browsers (like the role and placeholder attributes). This means our output is no longer valid according to any particular DTD. Real browsers and other non-validator user-agents have never cared about DTD compliance anyway, so I don't think this is a real loss.
Set default User-Agent to something that doesn't mention libwww-perl

It appears that both the open-source and proprietary rulesets for ModSecurity default to blacklisting requests that say they are from libwww-perl, presumably because some script kiddies use libwww-perl and are too inept to set a User-Agent that is "too big to blacklist", like Chrome or the iPhone browser or something. This seems doomed to failure but whatever.
Add reverse_proxy option which hard-codes cgiurl in CGI output

This solves several people's issues with the CGI trying to be too clever when IkiWiki is placed behind a reverse proxy.
Avoid mixed content when cgiurl is https but url is not
Use protocol-relative URIs if cgiurl and url share the same authority (hostname) but differ only in scheme
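The idea behind the two commits above can be sketched as follows. This is a hypothetical Python helper, not ikiwiki's actual Perl code: when the two configured URLs agree on everything except the scheme, the scheme is dropped so the browser reuses whichever protocol the page was served over, avoiding mixed-content warnings.

```python
from urllib.parse import urlsplit, urlunsplit

def maybe_protocol_relative(cgiurl, url):
    """Return cgiurl as a protocol-relative URI (//host/path...) when it
    differs from url only by scheme; otherwise return it unchanged.
    (Illustrative sketch only; ikiwiki itself is written in Perl.)"""
    a, b = urlsplit(cgiurl), urlsplit(url)
    if a.scheme != b.scheme and a.netloc == b.netloc:
        # Empty scheme yields a protocol-relative URI: //example.org/...
        return urlunsplit(('', a.netloc, a.path, a.query, a.fragment))
    return cgiurl
```

With an https cgiurl and an http url on the same host, this yields e.g. `//example.org/ikiwiki.cgi`, which inherits the scheme of the enclosing page.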
Merge branch 'ready/templatebody'
add more wording based on what chrysn suggested
Merge branch 'ready/document-success-reason'
Merge branch 'ready/trail-sort'
Make --no-gettime work in initial build. Closes: #755075
trail: don't generate a costly dependency when forcing sort order

pagespec_match_list() makes the current page depend on the pagespec being matched, so if you use [[!trailoptions sort="..."]] to force a sort order, the trail ends up depending on internal(*) and is rebuilt whenever anything changes. Add a new sort_pages() and use that instead.
Track whether we're in the scan or render phase

In the scan phase, it's too early to match pagespecs or sort pages; in the render phase, both of those are OK. It would be possible to add phases later, renumbering them if necessary to maintain numerical order.